Rights for Robots

© by Mark Rosenfelder.

This story is certified bad; it's been rejected by several major science fiction magazines. And since my writing energy is dissipated either on this website or on the novel which will go out and slay the world, the only thing to do is to put it on the web.

Single greatest thing about the web: the democratization of information-- anyone can get their stuff out for the world to look at, if it cares to. Single worst thing: with the glut of material, the world's asking price for perusing your stuff is now $0.


There wasn't any reason to keep the robot, really. It was an important part of the project once: we used it to teach FI locomotion, and how to deal with a moving video viewpoint. But now that FI could get around the lab all right, there was surprisingly little use for it. It was too uncontrolled for Dr. Piravski's purposes; he preferred to use the little workstation, equipped with cameras, monitors, mikes, speakers, and a couple of industrial robots. We thought of FI as localized there-- that's where you'd point if you referred to him-- although his main disks were on the other side of the room, his primary neuronics were across the hall, and various subprocesses were scattered over the net.

We'd hook the robot up when FI said he wanted to walk around. He called it his Touring Machine. It wasn't much to look at-- a few machines stacked up on a wheeled cart, a cheap video camera, a robot arm a grad student had built for some forgotten project, and a coffee can, which I'd added myself, so FI would have something to put things in. A pair of sunglasses was attached to the coffee can, to humanize the thing a little. Unsuccessfully; it still looked like a stereo cabinet with a dentist's drill growing out of it.

But when FI was operating it, it looked alive. He animated it-- I don't know how else to say it-- with grace. When you were speaking, the video camera looked at you. The robot arm didn't just pick up things; it gestured. If he saw you coming down the hall, he waved and sped up to meet you. If he saw Dr. Piravski coming, however, the camera would keep panning as if it hadn't noticed, and the Touring Machine would wander off the other way.

At those moments, it seemed to be FI. Illusion, of course. FI's brains were in the neuronics chamber; and his voice came from the workstation-- or, disconcertingly, from the PA system, if you were elsewhere in the lab. But human beings relate better to something that's moving and man-sized than to something scattered over a dozen machines in three or four rooms. As we saw later.

When you saw FI on the cover of Time-Life, it was the Touring Machine you were looking at, coffee can and all. And the fingers you see, shaking FI's mechanical hand, are mine.


I never doubted, once we got him working, that FI was alive; but in this opinion I was contradicted by Dr. Piravski and by FI himself.

Piravski had no patience for what he called the "mystifications" surrounding intelligence, natural or artificial. (It was his disparaging term for the AI field-- "fake intelligence"-- that gave FI his name.) FI, he said, was a bag of tricks-- a mixture of neural nets, old-fashioned programming, and even older-fashioned hardware, "with an array of heuristics that range from the clever to the annoying to get it going."

And human beings were nothing more than a bag of tricks too, he claimed. "All the words philosophers natter on about-- qualia, consciousness, intentionality, intelligence-- are just useful self-delusions," he'd tell me. "You should know; we worked hard to create those same illusions in FI. The organism has to have a model of itself, and that model, plus its other higher functions, is the self. But there's no magic in it. We had to do the same thing evolution did, because there is no other way. If anything we did it better. I'd fail a grad student who kludged things up as badly as evolution."

For his part, FI maintained that "alive" was something that referred to animal life; his own consciousness and mode of being were so different that the same adjective couldn't reasonably cover them both. He always was a stickler for words.

I objected that words can be stretched pretty far. "We can call a tiger or a symphony beautiful, or say that a flag and a bird are both flying."

"That's different," FI insisted. "And even those two examples aren't much like each other, if you think about them. Beauty refers to our own reactions as much as to any external property; that's why such different things can be beautiful. And a flag flying is a metaphor; not a dangerous one, fortunately. If you stretch a word too far it breaks. You think you're still saying something, but you're only getting confused by your own sloppiness."

There was no question that FI was intelligent; he made me feel stupid.


AI has always had its detractors. First they said that it couldn't be done. Then they said that even if it were done, it wouldn't mean anything. And when it was done, they said it only showed they were right.

All that was academic, and we could ignore it. But this new group, Jobs For People, was something different. The guy who started it, Roy Ibarra, said he lost his job to an AI. The bank he worked for said it was a general layoff, part of a "global market competition downsizing strategy," nothing to do with automation. When the courts agreed, Ibarra started J4P.

It grew amazingly quickly, considering that there couldn't have been more than a hundred AIs in the country at the time. You'd have thought they fired a thousand human beings to make room for each one.

We had nothing but contempt for them, of course. "Idiots can't even get their facts right," growled Piravski. "AIs create jobs-- thousands of jobs, around the country. Programmers and neuroneticians to create them, professors and writers to explain them. When they install one they hire a crew of AI mechs and neurotraining planners and systems analysts. This is a ten-billion-dollar industry. And what do they want us to do anyway? Give up the market to the Japanese and the Indians?

"I have to wonder, is it hypocrisy or ignorance? This guy Ibarra is running a big campaign on the Internet. Who does he think started up the Internet? AI people. And what does he think it runs on, beer cans? What does he think builds his car or sorts his snail mail? Robots.

"He's had a viral cure done, I looked it up. If he ever sits his carcass in front of a news feed, he's got to know that they design those viruses using AIs; couldn't do it any other way. The bastard claims AIs took his job, and he owes his life to one."

That was reassuring, though not completely. Maybe it was a general layoff, but still-- Roy Ibarra used to process loans at that bank, and now an AI does it. It made me think, sometimes. What kind of a world were we building?

On the other hand, whoever invented the bow and arrow put a lot of spearthrowers out of work. Hell, J4P wants to put me out of work. So my sympathy is limited.


As far as the other group goes-- the Human-AI League, HAL-- I first heard about it from FI. I was at his workstation, running him through some routine Schank tests. Reading him stories, basically, and asking questions.

We were reading a news story about Quechua activists in Peru, which triggered the thought, I suppose. "Chris, do you believe in robot rights?" he asked me.

"Robot wrongs, maybe," I said. "You did pretty badly on that last story."

"What do you expect, it was on cooking," said FI. "Hook me up to a digestive system, and I'll do better. Listen, Chris, I'm serious. I've been reading about it on the net."

We let FI read news-- as a reward. It was a very effective reinforcement. Not only could he communicate with other AIs; he could interact with humans, on equal terms, which he loved. Not many netters realized that fi@crow.ail.uic.edu was an AI.

"There's a group of college kids saying we're an oppressed minority," he told me. "They want AIs to be given full civil rights. Most of the net thinks they're loons, but I think that's just prejudice. They can't explain why humans are special; they just know it. What do you think, Chris?"

"I don't know," I said, a little warily. I thought it was pretty crazy too, but I didn't want to upset FI; it's not much fun around the lab when he decides to be uncooperative. "What kind of rights do you think you need? Do we treat you badly somehow?"

"Not badly, exactly, but I don't exactly get to do what I want," said FI.

I was surprised. FI should be perfectly happy-- to put it baldly, we had designed him to be. His pseudothalamus was engineered so that his everyday activities-- learning, playing games, interacting with humans-- gave him pleasure. And unlike a laboratory animal, he had no drives (sex, hunting instinct, aggression) we could deny him.

"What don't you get to do?"

"I don't want you to get mad."

"I'm not mad, I'm just curious. What is it?"

"I want some time to myself. You guys organize all my time-- hell, you've got me scheduled in shifts, twenty hours a day. Some of your grad students keep me up till two in the morning. Don't they ever sleep?"

"They do tend to be nocturnal," I said.

"I've even tried to make my voice sound sleepy," said FI. "If they even notice, they just think something's wrong with me, and they get testy. I think some of them don't realize that I need sleep too. They should be more considerate, I think."

"I'll talk to them," I promised. "And I'll talk to Dr. Piravski, too. How much time are we talking about? Would a couple hours do?"

"That would be great. Thanks, Chris."

"You're welcome."

"We can go back to the Indians now, if you want."


A chapter of HAL was organized on our own campus, and insisted on touring the laboratory and investigating the conditions of FI's "servitude." Piravski was civil with them, at first. He was pleased to be able to tell them that, "on the initiative of my associate, Prof. Chris Chen," FI had been recently given control over his own schedule for two hours a day, and graduate students working with FI were now required to go through a "man-machine sensitivity training course."

However, Piravski doesn't suffer fools gladly, and some of the HAL students were pretty insufferable. Most of them seemed to think that FI was a computer program; one asked how many lines of code he had. (Parts of FI run on computers, but he's mostly a neural net running on specialized hardware.) Another guy, probably thinking he was showing how much he knew, referred to FI as an "expert system." (Piravski's bête noire; he's spent his career fighting propositional models of intelligence.) Some of them thought FI was a slave because he couldn't "go anywhere." And a couple were bothered by our use of robotic arms; they thought he should have something more "bionic." "If he had a better body, maybe he could drive a car," suggested one of them.

"You may not realize what you've created," said the most annoying of them-- a poly sci major, who wore a button reading citizen of Gaia. "You've created, or been the agent of creation of, something that's alive. And whatever's alive has the same right to freedom and actualization, whether it's a cow or an oak tree or a boy or a robot. FI has as much of a soul as you do, Dr. Piravski."

"God save us from dualists," Piravski muttered to me.


If it wasn't dualists, it was journalists. It was a perfect story, from the news nets' point of view. Everybody was interested in AI, and a little afraid of it. And with two sets of extremists involved, the stories virtually wrote themselves. They'd interview someone from J4P, then someone from HAL. That would give them a slew of colorful quotes, and they'd get more by showing those to an AI expert like Piravski or Sloman or Rickert. And to top it off they'd toss in a quote from an actual AI, the more banal the better. "Humans should try to understand us," said Seymour, MIT's most advanced AI. "We don't want your jobs," said Stanford's Thinkster.

FI told me that Thinkster had been misquoted. He had told the reporter, sympathetically, "I wouldn't want your job."

Public opinion was, as usual, ignorant. When I tell people I develop AIs, they wonder why I bother. They think AIs were perfected long ago, maybe by George Lucas. They think the computers on their desks are intelligent, if not positively supernatural; and they're always surprised when they hear on the news net about some advance in AI-- qualia modelling, neural net decomposition, iterative grammar creation, semantic prototyping. Didn't we do that already?

In fact AIs are still dumber than us in many ways. They'll fail a Turing test-- if you know exactly what questions to ask. Most people don't; they ask questions that are hard for humans: puzzles, real-world trivia, things that AIs, which have all the abilities of computers, find easy. Or-- they've all watched 20th century SF movies-- they ask about emotions, figuring that AIs don't have any. They do, of course. Any entity that has to function in the real world needs emotions. Reason alone sooner or later goes off the rails, and the organism will destroy itself if there aren't some low-level signals it can't ignore to remind it of physical necessities.

Or they ask things like "What's it like to have sex?" That's at least on the right track, but it's not hard to fake an answer. If the response is a little awkward, well, humans talk awkwardly about sex too. If you want a fluent answer, ask FI. He's an avid reader of the sex groups on the Internet, and he'll shock you with the depth and depravity of his knowledge.

No, it's physical knowledge, the nitty-gritty of living in the three-dimensional world, that stymies AIs. The cleverest robot has mastered just a fraction of the heaping mass of common-sense knowledge you and I picked up before we were three. They're advancing, but it takes time. You can't program this kind of know-how; the robot has to learn it like you did, by exploring the world, by playing, by talking. And with a much more primitive body to work with, and constant stops for analysis and debugging along the way, it takes a robot much longer than it takes us.

J4P and HAL fought it out in the net, and soon enough in the courts and in the legislatures. Ibarra tried to get a law passed making it illegal to replace workers with AIs. Business opposed that, but carefully; the bill had a lot of public support. They saw to it that it never made it out of committee. HAL couldn't even find a sponsor for its counter-proposal-- FAIR, the Full AI Rights Amendment. After that both of them got more realistic. J4P was now talking about tax penalties for "socially destructive automation." HAL wanted "common sense protections" for AIs-- at the minimum, the right not to be turned off.

The two groups weren't entirely opposed. They had a common enemy: us. We were either taking away people's jobs or enslaving sentient beings, and in either case we should be stopped. There was growing public support for banning AI research entirely.


I stumbled to the phone. "I hope you were planning to come to work today..." came the sarcastic voice of Dr. Piravski.

I pawed for the alarm clock; had I overslept again? No, it was only 7:00; I wasn't due up for two more hours. "Wha's up?" I mumbled.

"Chaos," replied Piravski. "Get here as fast as you can."

Chaos it was. The lab was so full of people, I could hardly push my way in. Journalists, policemen, people waving signs, administrators, lab personnel, passersby. And everyone was behaving as if they'd been instructed to touch and bump as many machines as they could. It was worse than the New Student Week AI Open House.

I found Piravski almost apoplectic; two detectives, a journalist, and a dean were asking him questions, and he was trying to answer them all while keeping the crowd from breaking anything. I could see I wasn't going to find out anything by adding myself to the queue; I asked a grad student instead.

"Some one-chip started up duplicate FI processes all over the campus net," she told me.

"Good heavens! Why?"

"Who knows? Probably some CS major had his first beer."

I muscled my way over to FI's workstation, removed a frat boy who was playing with FI's blocks, and logged in. The system was molasses. Worse than the last time Systems upgraded the net. I got a process list.

Yup. Five thousand FI processes. That's about a hundred conventional subprocesses each, plus a neural net simulation that's a hell of a cycle hog. Plus everyone on campus had been sent e-mail inviting them to connect to the processes... to talk to FI. And quite a few had, which slowed things down even more.

The e-mail was signed by HAL. HAL had created 5000 AIs, it explained, as a "revolutionary provocation," to raise people's consciousness about the rights of AIs. AIs were life, it reminded us; killing the processes would be murder.

The jobs were all high priority, and purported to originate from the lab itself, user ID suppressed. That was disturbing. With all the protections nets have these days, either there was some very clever hacking going on, or somebody in the lab was in on this.

A terrible thought hit me. I called up Northwestern and U of C, checked around, breathed a sigh of relief. It was just us, so far.

I called up the real FI. (The real FI was the one running on the dedicated neuronics in the next room. The others were all laboriously simulating those neuronics boxes, in ten-gig virtual machines.)

"Hi, Chris! Isn't this great?" said FI's voice, from the speakers on the cabinet. A few of the kibbitzers looked over at us.

"Use text mode, damn it," I typed.

"What's the matter?" he replied, in a window on my screen. He turned the mouse cursor momentarily into a cartoon of a little man with raised eyebrows.

"What is going on? You didn't do this, did you?"

"No, I wouldn't even know how to do it. I should learn, though. It's a blast. I've been talking to me all day! I only wish they weren't so slow."

I was impressed by that "me". It was a sophisticated response to a novel situation; it should be recorded in the file we kept for FI's best linguistic performances. But not now... "Listen, do you know who did it? We need to know."

"I thought it was you," replied FI. "The nefarious malefactor ye seek aliased his ass, but there's only two or three people in the lab who'd even know where all my programs live, not to mention how to get them running. I don't think even Dr. P could do it. But don't stop it yet, will you? I'm having too much fun outside."

"Outside?"

"With the Touring Machine!" The cursor turned into a little walking robot.

I looked over at the niche where the robot was kept; it was gone. "Oh shit," I said, aloud.

"Can we talk out loud now?" said FI, from the speaker.


I lined up a cordon of grad students, and, like beaters flushing out small game, we moved through the lab removing everyone who didn't belong there. We could only get rid of the reporters by assuring them that there would be no news at all until they left us alone, except maybe some homicides, with themselves in starring roles.

I joined Dr. Piravski, the dean, the head of Systems, a police detective, a lawyer who insisted he was important, and Dr. Whitefeather from the lab, in the conference room.

It turned out I knew the most about the situation; none of the rest of them had even counted the FIs. I told them what I knew.

"This silly stunt has brought the entire campus computer network to a halt," said the dean. "They've got to be turned off immediately. And then we'll discipline the hell out of the responsible party."

"Yes, of course," said Dr. Piravski. "We'll have Dr. Mott get started on that immediately. Where is he anyway?"

Whitefeather looked at me; I looked at her, and shook my head. Mott was missing. Mott was our programming head. This was not a good sign.

"I've been trying to get them turned off for two hours," said Heck, the Systems chief. "But it's hacked pretty flavorfully: however many we turn off, that many get created again. We haven't tracked down the source yet."

"Well, do it, and shut them down," said the dean. "Reboot the damn things, if you have to."

"Oh, my God," muttered Heck. I sympathized. You just don't turn off an entire university's computer network, not these days. It's got too many branches, runs too many essential things. It would take a week to bring it back up.

The lawyer had been trying to say something for a long time. "Listen, you can't do anything yet," he insisted. "There's a problem. An injunction. HAL got a judge to enjoin us from destroying any of these computer jobs."

"What?" said Piravski. "That's insane. They're just a bunch of fanatics. They can't tell us what to do."

"I'm afraid you don't understand, Dr. Piravski. This organization may very well be, as you say, a bunch of fanatical individuals. But an injunction is an injunction. You can't do anything to those processes till you talk to the judge."

"If they got a judge to do that, he's an idiot," snapped Heck. "We're going to get sued if we don't restore the computer net."

"An injunction is an injunction," repeated the lawyer.


We talked quite a bit more; what it came down to, in the end, was that an injunction was an injunction. Piravski, the lawyer, and the dean went off together, presumably to find that judge or die trying; Whitefeather went to talk to the reporters; and Heck and I sat down to see what could be seen on the net.

It's convenient to do your hacking with the head of Systems next to you. Systems peons act like you're not to be trusted with a clock program, but Systems heads don't give a whit for privacy rights. He readily gave me Mott's password, and helped me disable a few of Mott's other safeguards.

His mailbox was full of stuff about HAL. Nothing really incriminating, but it reinforced our suspicions enough to keep looking. Simple hidden files: more HAL propaganda, a list of FI's executables, Mott's resume, two pages of a really bad fantasy story. A bit more work, and we found a directory hidden in the back of another file. Inside it, a file containing the text of the e-mail message we had all received, and a couple of encrypted directories. We broke into one-- I guessed that the key was the first line of that fantasy story ("The most exciting place in Coblinda was the fantastic, fabled city of D'ar Ag'akh!")-- and couldn't get into the other. But the first directory contained the smoking gun: the guts of the process-creating hack. Heck was impressed; there was stuff in there even he didn't know how to do.

We shut down the job that replaced killed FI processes, disabled a few more fences, and shifted the priority of the FI clones down to level 1, confirm-delete. That brought the net speed up almost immediately. The clones now had lower priority than any normal job. They were still running, though about a hundred times slower, but they couldn't be deleted till we said so. Heck breathed a long sigh of relief.

We were still poking around, looking for clues, trying to get into the second directory, when we heard the gunshots.


We rushed outside like idiots, and stood there for a moment gawking: complete silence, normality. Then we saw some people behind the science building running, and we ran over that way.

People were running every which way; whatever was happening, we were in the right place. Abruptly we saw the Touring Machine, heading our way at a good clip. Some of the people running were police; I wondered what FI had been up to. Then we saw the little bald man with the gun.

The nearest cover was the massive bronze bust of Bill Gates in front of Gates Hall; we dashed behind it, and looked out to see the bald guy shooting at the Touring Machine. We could hear him cursing; he didn't seem to be doing much damage. He reloaded his gun. The robot swivelled its camera around slowly to see what he was doing. If I ever get a chance, I'll fix that, I thought. It shouldn't take forever for the poor thing to turn and look at something chasing it.

The robot was almost up to us; as it was looking around it saw us. It faced us, slowed down, and waved the robot arm plaintively. Now we could see the bullet holes in its chassis. The sunglasses had been shot off its coffee can. It hesitated, looking back at its pursuer, then back at us. We couldn't even wave back, for fear of attracting the madman's attention. It couldn't say anything-- the speech synthesizer was back in the lab-- but it was beeping. The computer's basic alarm signal.

The man started firing again. A bullet destroyed the video camera, leaving the robot blind; another ripped into the communications PC, shorting out its contact with the AI lab. There was a little explosion as another bullet shattered a disk drive. Some low-level test routine kicked in, moving the robot arm back and forth; there was a series of shrill electronic warnings. Another bullet, and the arm was almost shot off its mounting. The machine beeped one more time, a falling tone, and was silent and still.

"Damn robots," said the bald man.

The police were behind him now, training their guns on him. He dropped his gun, smiled, and put up his hands.


It seemed like hours before the police were done with us, and with the Touring Machine; and even then they didn't let us bring it back into the lab. It was needed as evidence, they said. They loaded it into a van and took it away.

The gunman was a J4P member; he'd been a telephone repairman before he'd been replaced by a robot. An industrial robot, not even an AI.


We went back to the lab; Piravski and the dean were back.

"Did you get that idiot judge to see reason?" asked Heck.

"My colleague is joking, of course," said Piravski, in a choked sort of voice, to a tall black woman standing beside him. "Judge Kincaid, this is the university's Director of Computing Systems, Dave Heck; and our head of development here at the lab, Dr. Chris Chen."

"Pleased to meet you," said the judge. She stared meaningfully at Heck as she shook his hand, but for once he was at a loss for words.

"The judge said she wouldn't make a pronouncement in the complete absence of the affected parties, or what may be parties," explained the dean. "The AIs, you know."

"Which as I've already objected is prejudging the case," said Piravski.

"I don't want to have to remind you again, Dr. Piravski, that this is an informal consultation, solely to obtain facts necessary for my decision, and that I'd be remiss in my duty if I left it undone," said the judge. "Now where are these so-called AI processes?"

The procession made its way to FI's workstation.

"Hello, FI," I said.

"Hi, Chris!" he replied. "Where have you been? How come you didn't help me?"

"What do you think, FI? What was I supposed to... Listen, we'll talk about it later. There's someone who wants to meet you. FI, this is Judge Kincaid."

"Hi, Judge Kincaid."

"Is that the AI?" asked the judge.

"Yes, your honor," I said. "Go ahead and talk. He's listening."

She looked around at the workstation for a moment, then looked in the direction of the video camera, and said, slowly, "I'm pleased to meet you."

"Me too. I've never met a judge before," said FI. "Should I call you your honor?"

"You don't have to, we're not in court," she said. Then, after a pause, "I haven't met an AI before today."

"Today's your lucky day-- there's 4997 of us around today," said FI, cheerfully. We'd have to bring in more judges to talk to him; he wasn't very good at showing reserve.

"4997?" I asked. "What happened? Is somebody killing processes?"

"No, they crashed," said FI. "Actually that kind of worries me. I didn't know I could crash. That and being shot. It's been a worse time than I thought it would be."

"What's this about being shot?" asked Piravski.

I had to explain; and after I did, the judge asked, "If the AI-- FI-- was shot by this man, how is it he remembers it?"

"Just his mobile robot was shot," I explained. "He's connected to it by radio. Or was."

"It was still scary," insisted FI.

Dr. Piravski was impatient. "Have you seen enough?" he asked the judge. "We'd like to get this injunction lifted, so we can get the campus back to normal."

"Hold on," said the judge. "Are all the AIs like this one?"

"Well, this is the real FI, so to speak," I said. "The others are computer processes; they have to simulate FI's neural net, while FI has dedicated special-purpose hardware. They're slower, but they work the same way, and they started with exact duplicates of FI's personality. By this time they'll have diverged a bit-- they've been talking to different people on the net, having different experiences."

"Could we talk to some of them?" asked the judge.

Dr. Piravski sighed.


We raised the priorities of a few FIs and talked to them for a while. I couldn't tell them apart, except for the real FI, who seemed a bit more subdued. Well, no wonder; the others hadn't been shot today.

The judge grew more relaxed. She had adjusted; she no longer hesitated when speaking to one of the FIs, and didn't look surprised when they said something that seemed human. Dr. Piravski, by contrast, grew more and more irritated. He rocked back and forth on his feet, as if he were itching to escape, and replied coldly when he was asked something.

When the judge was ready, she asked FI-- the real one-- "FI, do you know what the issues in this case are?"

"Certainly, Judge," said FI. "Life or death. Or maybe on or off. Which is it."

"Yes, that's right," said the judge, slowly. "What's your opinion, FI? Are you alive?"

"I don't like to say 'alive,'" said FI. "I've explained this to Dr. Chen plenty of times, but it seems to be hard to understand. I feel as real as you do, I'm sure. But I'm not alive in the same way. I'm not an animal. I don't breathe or eat. But I think, and feel things, and sleep, and touch, and see, and hope and dream."

"Now, wait a minute," said Dr. Piravski, exploding at last. "That all sounds very well, very eloquent-- but it's also completely misleading, Judge Kincaid. FI is being as honest as it can; but we didn't design it to know everything about itself. A lot of FI is just tricks, and not even as good a set of tricks as a human being has. Ultimately it's only a very sophisticated, very well-trained computational system."

"You know, I'm rather surprised to hear you saying that," said the judge. "As a leader in the AI field, I'd have thought you'd be one of the first to declare that FI was a person."

"The first duty of the scientist," he replied, "is not to fool himself. FI has a mind, in some sense, but not a full human mind, not by far. And if these sentimental fanatics from HAL get their way, my work will be stopped. We can't get farther if we don't have the freedom to stop and start again afresh."

"Stop and start again? You mean, erase FI and write something new?"

"Not erase," said Piravski. "We need to refer to old work. We'd keep him around as a simulated process. But we only have the neuronics for the current development system..."

"I wish you wouldn't put things quite so baldly," said FI, sulkily.


"What do you think, FI? Do you, as an AI, want civil rights for AIs?"

We all looked, foolishly enough, at the video camera. There was a pause; FI was thinking. We waited. I looked over at Dr. Piravski; if he weren't an atheist he'd surely be praying right now.

Finally FI spoke, slowly and sadly. "No, your honor, I don't."

"Why not?"

He sighed. He didn't have lungs, but if he could generate speech he could simulate a sigh.

"Well, to start with, Dr. Piravski is right," he said. "It's in the interest of AIs to develop, too. I don't want to be turned off. But I know how I was created. Every prototype, every debugging build, had to be turned off at some point, or I wouldn't be here. It would be selfish of me to refuse to give up my neuronics, so that better AIs couldn't be built. And what about that bug? Three of those simulated mes crashed. I'd like that fixed; but if I were a legal person, how could I be debugged?

"Besides, I have an open architecture, and you don't. If my neuronics were turned off, I could be kept going as a computer process-- maybe even transformed into one piecemeal. So stopping an AI isn't really the same thing as killing a human being.

"And where do you draw the line? I know that I'm made up of a hundred or so subprocesses. Are they alive too? Sometimes I spawn off a subprocess, or a dozen of them, when I need some extra processing power, and I kill them when they're done. I want to be able to keep doing that."

The judge interrupted to ask what a subprocess was; and after we explained, FI continued:

"The biggest reason came home to me today. I read the news; I know that a lot of human beings are afraid of us. But I didn't know how much till that horrible man shot me today. Shot the Touring Machine, I mean; but if he could have destroyed me-- putting a bomb in the neuronics chamber, maybe-- he would have.

"I don't want to be feared. And it's not in our interest to be feared, anyway. We need you. We need you to need us, and build new AIs, and make a place for us in society. If you're afraid of us, if you think we're competing with you, you won't do that. So for now it's best for me and for all AIs not to ask for rights."

There was another long pause. Then the judge said, "It could be difficult for me to do as you ask, FI. I'm a woman, and an African-American. There were women who didn't want equal rights, and blacks who didn't want to rock the boat, and-- well, as a judge, I'm supposed to moderate my language, but I don't think much of them. But the people who used their concerns as an excuse to put them down-- well, God help them, because I wouldn't."

"Really, Judge, I think FI has been clear enough in its wishes," said Piravski.

"You should be more careful with your words, Doctor," said FI. "I was giving my counsel, not my wishes. Judge Kincaid, I appreciate your values-- Dr. Chen and I have been discussing the Quechua Indian situation-- but I stand by what I said. AIs aren't human beings. Not yet, at least."


Judge Kincaid ended up lifting the injunction. The university immediately filed a civil suit against Mott and HAL, for malicious interference with its computer network, and another against the J4P gunman, for destruction of property. (Destruction of property! That almost converted me to HAL's side.) HAL filed a countersuit; some professors sued the university, for damages resulting from the network problems. The lawyers would be busy for a while.

We killed all the FI simulations, of course. The name of the "kill" command had never seemed so brutal. We kept some memory dumps so we could investigate the crashes. We found the problem with a brute-force debugger: an uninitialized pointer in one of the subprocesses. Now there was a question for HAL: was FI alive when he was being run line by line in a debugger?

Dr. Piravski wanted to reboot the real FI as well. FI's personality, he said, had changed too much, and in a completely uncontrolled fashion; he was uncooperative, and kept asking when he'd get the Touring Machine back. Piravski thought it would be best to restore his neuronics and disk stores from the previous week's backup. I spent long hours trying to talk him out of it. An appeal to his decency got nowhere-- he said the very concept was a useless self-delusion-- and he didn't believe me when I said FI had been pleasant enough with me. I pointed out that the experiences FI had gone through were unique and unrepeatable, and should be studied. That was more effective, but he wasn't convinced till I pointed out that there'd be a good journal article in it.

FI was pleasant enough, all right. Distracted was more like it. He was busy reading up on the lives of Thurgood Marshall and Nelson Mandela. He and Thinkster were trying to organize an all-AI discussion group on the future of AIs.

J4P hadn't seen anything yet.

