Understanding the Chinese Room


I was surprised the other day to find someone claiming that John Searle's famous Chinese Room argument had never really been refuted. I don't agree. I think it's an infuriatingly incoherent bit of propaganda that's been excellently answered.

But I had forgotten: I used to spend a good deal of time in comp.ai.philosophy arguing about Searle, and no one ever budged an inch. It was the Esperanto or the Middle East crisis of philosophy. Worse, really-- I've known people who've reversed their politics or abandoned an auxlang, but it seems that once someone has formed an opinion on the Chinese Room, it's permanent.

--Mark Rosenfelder


The Chinese chatroom

[I already know about the Chinese Room.]
[I can already refute it.]
[I'd rather hear about the Turing Test.]
[I'd rather go meet Lore.]

Imagine that you find that rara avis, a chatroom where people speak intelligently and in complete sentences. You are particularly taken with a smart, friendly, funny girl who goes by the name CR. She doesn't always respond quickly, but what she says is always worth reading.

Later, to your surprise, you learn that CR isn't a girl at all-- or even a human being. CR is an AI project... and oddly enough, it's not even running on a computer: the program consists of a bank of rules for processing chat messages and generating responses. And the rules aren't even in English; they're written in Chinese, and laboriously executed by a Chinese grad student, a man named Si-Er Le.

And the kicker to the story is: Si-Er doesn't know a word of English. He can handle the English input only through careful instructions written in Chinese, and selects English words for output according to complex rules also written entirely in Chinese.

Now, when you were chatting with CR you found her a completely convincing human being. She passes the Turing Test, then, and so she must be a fully functioning mind-- an artificial intelligence, indeed.

At this point, Searle jumps in to remind us that CR is really just a Chinese grad student executing rules from a big book-- and Si-Er doesn't understand any English at all. Si-Er is executing an AI program, but he knows nothing but Chinese, and running the program doesn't give him any ability to understand English.

The same holds for a normal computer: just by executing a program, it can't be said to understand English.

According to Searle, then, Turing is wrong: the fact that a computer could communicate in English doesn't mean that it understands English, since the computer in the Chinese room-- Si-Er executing rules from a huge rulebook-- certainly understands no English.

(I've purposely reversed part of Searle's story. In his CR, the man in the room speaks only English, and the room's input and output are Chinese characters. In logical form, my presentation is precisely equivalent, but removes a subtle bias. Most Westerners find Chinese characters baffling, and this helps Searle get across the idea that the computer can never "understand" what it's doing.)

Schank's hamburgers

Searle has presented the CR in at least three places:

"Minds, Brains, and Programs", Behavioral and Brain Sciences, 1980-- cited below as BBS.
Minds, Brains and Science, 1984-- cited as MBS.
"Is the Brain's Mind a Computer Program?", Scientific American, January 1990-- cited as SciAm.

The three presentations vary in details, supporting arguments, and targets, and it's instructive to look at the original context that prompted the invention of the CR.

The BBS article is an explicit response to the story-understanding programs of Roger Schank (though Searle also mentions Terry Winograd's block-manipulating robot SHRDLU, and Weizenbaum's cute but silly psychoanalyst ELIZA). The CR itself is modelled after Schank's program, whose goal was to read anecdotes like

A man went into a restaurant and ordered a hamburger. When the hamburger arrived it was burned to a crisp, and the man stormed out of the restaurant angrily, without paying for the hamburger or leaving a tip.

and answer questions like "Did the man eat the hamburger?"

Now Schank, to my knowledge, never claimed that such a program "understood stories" or "understood English" or "was intelligent". Schank's contribution here was the notion of scripts, essentially packages of real-world knowledge in limited domains-- in this case, a script for a typical visit to a restaurant.
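
Just to make the idea concrete, here's a minimal sketch of a script-- my own toy in Python, with invented details, nothing resembling Schank's actual code. The point is only that the "rulebook" contains default knowledge about restaurants, which licenses answers the story never states:

# A toy "restaurant script": the stereotyped order of events in a visit.
SCRIPT = ["enter", "order", "food arrives", "eat", "pay", "tip", "leave"]

def did_event_happen(event, observed, aborted_after=None):
    """Default reasoning: an unmentioned script event is assumed to have
    happened-- unless the visit was aborted before its place in the script."""
    if event in observed:
        return True
    if aborted_after is not None and SCRIPT.index(event) > SCRIPT.index(aborted_after):
        return False
    return True   # the script supplies the default answer

# "...the man stormed out angrily, without paying or leaving a tip."
observed = ["enter", "order", "food arrives"]
print(did_event_happen("eat", observed, aborted_after="food arrives"))   # False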

Searle doesn't address this at all, and in a sense this is where he starts to go off the rails. His argument is based on the idea that computers only do "symbol manipulation". What he fails to see is that Schank really agrees with him.

Searle and Schank have the same target: '50s-era attempts at machine translation, which more or less expected that translation was mostly a matter of simple though laborious string handling. The program's database would consist largely of lexicons and grammatical facts about the languages concerned.

Searle's response was to create an abstract argument that "symbol manipulation" is inherently limited; Schank's was to try to see what was needed beyond string manipulation. His answer was scripts-- a form of real-world knowledge.

Are computers restricted to manipulating symbols?

In [SciAm], Searle outlines his argument as follows:

Axiom 1. Computer programs are formal (syntactic).
Axiom 2. Human minds have mental contents (semantics).
Axiom 3. Syntax by itself is neither constitutive of nor sufficient for semantics.
Conclusion 1. Programs are neither constitutive of nor sufficient for minds.

And this works if you accept the axioms; but in fact they're quite dubious.

Searle exemplifies one of the CR's rules as "Take a squiggle-squiggle sign from basket number one and put it next to a squoggle-squoggle sign from basket number two." (SciAm 26)

That fits Axiom 1, all right-- and note that it's precisely the sort of rule we'd expect from a '50s-era machine translator, one which deals with nothing but words and syntactic rules. A rule from a Schank story understander ("Open the book called 'Restaurant procedures' and turn to the section on waiters") wouldn't support Searle's story nearly so well.

The idea that computers do "symbol manipulation" is an abstraction. Computers do voltage manipulation. At a slightly higher level we can talk about the infamous zeros and ones-- for instance, 00101010. Among other things, this could represent the number 42, the character *, or a SUB command.

Or it could point to another location in memory. This is a very basic part of programming-- the pointer-- and it nearly demolishes Searle. It isn't even approximately true that a rule like "Look at page n" (or "Turn to the section on waiters") is "symbol manipulation". It's a connection, allowing one piece of data to point to another-- possibly something on an entirely different conceptual level. A variable, for instance, can point directly to its value; a function name can point to the code that executes it; an event record can refer to the state of the keyboard or mouse; a device driver can point to a printer or a sound port or a video camera; a URL can point to a cache of data halfway around the world.
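
To make that concrete, here's a tiny sketch-- my own illustration, not anything of Searle's-- of both points: the same bits mean different things depending on how they're read, and a value can do nothing more than point at something else, which is where the interesting content lives:

bits = 0b00101010
print(bits)          # read as a number: 42
print(chr(bits))     # read as an ASCII character: '*'
# (read by a CPU as an opcode, the same byte could be an instruction)

# "Look at page n" / "Turn to the section on waiters":
pages = {12: "the section on waiters", 13: "the section on cooks"}
rule = {"goto_page": 12}
print(pages[rule["goto_page"]])   # the rule's content lives wherever it points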

Searle's own proposed rule ("Take a squiggle-squiggle sign from basket number one...") depends for its effectiveness on xenophobia. Apparently computers are as baffled by Chinese characters as most Westerners are; the implication is that all they can do is shuffle them around as wholes, or put them in boxes, or replace one with another, or at best chop them up into smaller squiggles. But pointers change everything. Shouldn't Searle's confidence be shaken if he encountered this rule?

If you see 馬, write down horse.

If the man in the CR encountered enough such rules, could it really be maintained that he didn't understand any Chinese?

Now, this particular rule still is, in a sense, "symbol manipulation"; it's exchanging a Chinese symbol for an English one. But it suggests the power of pointers, which allow the computer to switch levels. It can move from analyzing Chinese brushstrokes to analyzing English words... or to anything else the programmer specifies: a manual on horse training, perhaps.

Searle is arguing from a false picture of what computers do. Computers aren't restricted to turning 馬 into "horse"; they can also relate "horse" to pictures of horses, or a database of facts about horses, or code to allow a robot to ride a horse. We may or may not be willing to describe this as semantics, but it sure as hell isn't "syntax".
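
A hedged sketch of what such an entry might look like-- again my own invention, not Searle's rulebook or any real AI system-- shows how following a reference switches levels, from a Chinese character to words, facts, images, or executable behaviour:

def feed_the_horse():
    print("Dispensing hay.")     # stand-in for code controlling a robot

knowledge = {
    "馬": {
        "english": "horse",                      # another symbol...
        "facts": {"legs": 4, "rideable": True},  # ...but also a bundle of facts,
        "picture": "horse_photo.jpg",            # ...a pointer to an image file,
        "action": feed_the_horse,                # ...and a piece of behaviour.
    }
}

entry = knowledge["馬"]
print(entry["english"])    # horse
entry["action"]()          # Dispensing hay.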

But is it intentionality?

At this point Searle would probably object that the computer's pointers can only point to other data (even the mechanisms to control a robot are, internally, just device addresses); while human intentionality points outside the brain (e.g. to actual horses).

An almost religious divide opens up here: Searle's supporters think it's perfectly plain that humans can "refer" but computers can't; his opponents think it's perfectly baffling why Searle can't see that the brain works the same way as the robot.

Brains are composed of neurons and other slimy stuff; none of it, so far as we can see, extends tendrils out to touch actual horses. Brains can't even see or touch: they get their visual information via nerves from the eye, and they send out motor commands by other nerve pathways. How is this any different from a computer controlling a robot?

One philosophy does provide the necessary difference: dualism. If humans have souls distinct from their bodies, then these might contain whatever it takes to refer, or execute any other mental ability we are uncomfortable assigning to computers. Dualism is a perfectly respectable philosophy with great intuitive appeal; intellectuals usually assume that it's been refuted, but can't explain how or when.

Surprisingly, though, Searle is not a dualist. He insists that the brain is a biological machine that happens to be able to cause minds, though he admits that he has no idea how it does so. (Searle even accuses his critics of being dualists (SciAm p. 31)... but then, they suspect he's one. 'Dualist' is mostly used in philosophy as a term of abuse.)

The CR begs the question. The very existence of "understanding" is at stake: the question is whether the CR "understands". Yet "understanding" is simply postulated of other entities-- human minds, and the human inside the CR. Searle cannot have it both ways. If he wants AI researchers to demonstrate how intentionality can arise out of computer operations, then he must also demonstrate how it can arise out of brain operations. If he wants to simply posit that humans can refer because he's confident that he can do so himself, then he must allow the same prerogative to the persona projected by the CR.

(When does it make sense to say that a computer program "refers" to something in the outside world? I'll get back to this below, when I consider what we can learn from Searle.)

Nailing down the cast list

The CR is a metaphor for AI; but before drawing conclusions from it, we should know what's a metaphor for what. Fortunately, Searle has told us:

Now, the rule book is the "computer program." The people who wrote it are "programmers," and I am the "computer." The baskets full of symbols are the "data base," the small bunches that are handed in to me are "questions" and the bunches I then hand out are "answers." [SciAm p. 26]

So, the man in the CR represents the computer-- more precisely, the CPU, the tiny chip which actually executes the instructions that make up a program. Now, I actually find this bit unarguable: the CPU really doesn't understand, and no one who believes in "strong AI" thinks it does.

But note Searle's sleight of hand. The CR story tells us that the CPU doesn't "understand". But a page later he's trying to say that programs don't understand. ("Programs are neither constitutive of nor sufficient for minds.") This is part of why I call the CR infuriating; an argument full of such blunders can only appeal to someone who is, as Searle would say, "in the grip of an ideology".

The systems reply

This leads us to the most popular response to the CR: it's not the CPU that understands anything; it's the system-- the processor plus the program plus the data.

If we were looking at a human brain, we would hardly expect a single neuron to "understand"; understanding is a property of the system as a whole, or some large subset of it.

Searle calls this the systems reply. He has a few replies to it; in MBS, for instance, he repeats the claim that "[t]here is no way that the system can get from the syntax to the semantics. I, as the central processing unit have no way of figuring out what any of these symbols means; but then neither does the whole system." (p. 34).

The first part of this begs the question-- the CR is supposed to demonstrate, not assume, that programs can't "get from the syntax to the semantics". The second part is a false inference. True, the CPU can't "figure out what the symbols mean". CPUs can't figure anything out; they're exceedingly stupid machines, consisting of little but a couple of numbers plus wires that lead to other parts of the machine. They are unchanged by the operations they perform; they can't learn a thing. But so what? The CPU isn't the system.

Searle wants to jump from what the CPU knows to what the program knows; but there is simply no basis for this jump. To find out what the program knows, we'd have to examine it.
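
A toy interpreter makes the asymmetry vivid-- this is a sketch of my own, not a real machine or Searle's room. The executing loop is a dozen fixed lines that never change; everything the system "knows" is in the rules it's handed:

def run(program, memory):
    """A toy CPU: fetch a rule, carry it out, move on. The loop is the same
    whatever the program means; all the content is in the rules and the data."""
    for op, arg, value in program:                    # fetch...
        if op == "SET":
            memory[arg] = value                       # ...and blindly execute
        elif op == "APPEND":
            memory[arg] = memory.get(arg, "") + value
        elif op == "PRINT":
            print(memory[arg])

memory = {}
rules = [("SET", "reply", "Hi"), ("APPEND", "reply", " there!"), ("PRINT", "reply", None)]
run(rules, memory)   # prints "Hi there!"-- and the loop above learned nothing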

Searle the memorialist

His other response is to suggest that the man in the CR memorize all the rules in the rule book, and keep all the calculations in his head. Then there's nothing in the system that's not in himself-- and he still doesn't understand.

This is a nice example of what Daniel Dennett calls an intuition pump-- a seemingly plausible argument that works by distracting our attention. Searle wants to pass off an observation about the CPU as a conclusion about the program. To make this work he needs to inflate the role of the CPU as much as possible, and dismiss the role of the program. To this end he inflates the CPU into a full-featured human being, who can be interrogated as to whether he "understands" the rules he's processing; and he deflates the program into a few "bits of paper" (BBS p. 359).

The memorization response inflates the CPU further-- without changing its role. The man is supposed to be able to memorize billions of rules and flawlessly recall and execute them, but his role is still merely to take one rule at a time and carry it out. Conceptually, he is still a tiny part of the system; but Searle insists on letting him speak for the whole.

The problem is one of scale. Searle seems to have no notion of the relative size of the major components of the CR. If we were still talking about Schank's program, it might amount to only a few thousand lines of code. But his argument is supposed to apply to any program that can "pass the Turing test" (SciAm p. 26)-- that is, one that at least verbally can fool a human observer into thinking it's human.

There is little reason to believe that this could be done with a program much less complicated than the brain itself. The brain contains about ten billion neurons, each of which is a simple computer in itself-- and each can be connected to dozens or even thousands of other neurons. And that only addresses the electrical activity of the brain; there are also neurotransmitters and other chemicals to worry about.

The man in the CR cannot memorize ten billion rules; he has only ten billion neurons of his own to memorize anything, and most of those are already in use.

Let's try to make the size of the program more tangible, in hopes of restoring a balanced perspective. Let's assume that the action of each neuron can be summarized in a single rule-- let's say, three lines of text. My almanac crams in about 150 lines per page, and contains 1000 pages. At 50 rules per page, all 10 billion rules could fit into 200 million pages-- or 200,000 volumes the size of the almanac. That would be a respectable town library; the CR's rulebooks would require six miles of shelf space.
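
For anyone who wants to check the arithmetic, here it is spelled out (same assumptions as above, plus a guess of roughly 5 cm of shelf per volume, which is mine):

neurons        = 10_000_000_000   # ten billion rules, one per neuron
lines_per_rule = 3
lines_per_page = 150
pages_per_book = 1000

rules_per_page = lines_per_page // lines_per_rule    # 50
pages = neurons // rules_per_page                    # 200,000,000 pages
books = pages // pages_per_book                      # 200,000 volumes
shelf_miles = books * 5 / 2.54 / 12 / 5280           # cm -> inches -> feet -> miles
print(pages, books, round(shelf_miles, 1))           # 200000000 200000 6.2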

The program in the CR is enormously, mind-bogglingly, ludicrously more complicated than the CPU. So whatever we can say about the CPU tells us very little about the system as a whole.

The same problem arises with the Chinese Gym, a variation on the CR Searle introduces [SciAm p. 28] to counter Paul and Patricia Churchland's reference to massively parallel computing. In the Chinese Gym there are scores of human processors, cooperatively executing a set of algorithms. As usual Searle points out that none of the men understand Chinese. The wonder is why he thinks this proves anything-- would he expect a single neuron of his own brain to understand English? The men in the gym are tiny parts of a huge system, and there is no reason to expect "understanding" of any component smaller than the entire system.

For more on the problem of scale I heartily recommend Douglas Hofstadter's response to Searle in The Mind's I.

When will we understand understanding?

People sometimes want to know what Searle means by "understanding". Searle doesn't think it needs definition: he thinks we only need to consider a clear case where people understand, and the equally clear case where the man in the CR doesn't. (BBS p. 357f).

I don't think he can get away with that. The problem is this: in form, the CR argument analyzes a system which claims to "understand", and makes a big deal of the fact that a part of the system "doesn't understand". But this is precisely what we would expect-- indeed, insist on-- in any true account of an understanding system.

Let's suppose that in the year 2115 neurologists tell us that they've figured out how the brain actually understands things. What would that mean? Precisely that they can explain it in terms of components that do not themselves understand.

Perhaps they tell us:

Here's how the mind understands. The mind is composed of three components, the blistis, the morosum, and the hyborebus. The blistis and the morosum have nothing to do with understanding; the part that understands is the hyborebus.

We don't have to know what these things are to know that they've failed. This cannot be an explanation of understanding, because it simply transfers the problem from the "mind" to the "hyborebus". It's like explaining vision by saying that the optic nerve brings the image from the eye to the brain, where it's projected on a screen that's watched by a homunculus. How does the homunculus's vision work?

A true explanation would have to explain how understanding arises from the operation of the mind's components, none of which themselves have understanding, or which have it only partially. Similarly, an explanation of vision deals with how the eye divides up the image, how different parts of the image are recognized, how edges are detected, and so on. The idea isn't to find a part of the brain which "sees"; the idea is to divide up "seeing" into smaller tasks.

Searle's expectation of finding a component which itself understands is simply yokelish, like looking at a water molecule and hoping to find it wet. Introducing a human being into the CR, something that can potentially understand, is a distraction, an invitation to waste our time "explaining" understanding by looking for understanding homunculi.

I simulate thinking, therefore I simulate existing

Searle suggests in the SciAm article (p. 29) that minds can conceivably be simulated, but that we should not mistake this for creating a mind, any more than we'd mistake a simulation of the digestive system for eating.

This sounds plausible, even unarguable, but it's really very sloppy. Searle himself admits that computers can do "symbol manipulation". So why isn't he worried that they're only simulating symbol manipulation?

It's not as easy as Searle thinks to distinguish between "real" and "simulated" things. You can use a graphics program to simulate building a car, but clearly it's not a real car. On the other hand, you can use a CAM system connected to industrial robots to build a real car.

With non-physical things, it's even harder to draw the line. Does a test scoring program (such as I write in my day job) "really" do scoring, or only simulate it? Does a banking program really do financial transactions? Does a speech generation program "really" say words, or only simulate saying them? Does ELIZA amuse you, or only simulate amusing you?

Searle provides no guidance for answering these questions. His attitude at all times is amazement that anyone can't simply see the "rather obvious" point he's trying to make. This is a pity; this is the way an intelligent man makes himself stupid. You can't figure out a problem you don't deign to spend any mental effort on.

What Searle kind of got right

Can we learn anything from Searle at his best? I think we can; what we can learn is not to talk loosely about mental phenomena.

Searle's question about simulation versus instantiation is certainly worth thinking about, and I think we computer programmers do fool ourselves about this.

As an example, in Civ3, you might have a city named London, which falls to the Persians. The Civ3 model is astonishingly satisfying for being built out of such simple elements; but no one thinks that the London on the screen has anything to do with the real city of London. Yet I fear that an AI programmer who writes

OWNER(London) = PERSIANS

is tempted to think that he's modelled ownership.

In a sense it would be healthier to write something like

BLITBLATT(Jonquers) = POOMIDOOMS

...just to make sure that you don't project your own knowledge of English words into the computer. You should have to make a case that BLITBLATT is a model of ownership.
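
The same discipline can be applied in actual code-- a toy of my own, with invented names. Structurally the two tables below are identical; nothing in the data itself makes the first a model of ownership and the second gibberish, so any such claim has to be argued for separately:

cities  = {"London":   {"owner":     "Persians"}}
thingos = {"Jonquers": {"blitblatt": "Poomidooms"}}

def reassign(table, key, field, new_value):
    table[key][field] = new_value     # the program treats both tables identically

reassign(cities,  "London",   "owner",     "English")
reassign(thingos, "Jonquers", "blitblatt", "Weebles")
print(cities, thingos)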

I also think Searle is right that "symbolic manipulation" isn't the same as understanding, though I disagree with Searle that that's all that programs can do. It's impossible, for instance, that any catalog of grammatical and lexical information will ever be able to handle machine translation. Voluminous knowledge about the world will be necessary.

Winograd pointed this out years before Searle, in fact. One of his examples was:

The authorities arrested the women because they advocated revolution.

If we translated this sentence into French, we'd have to decide whether to use masculine or feminine 'they'. Linguistic facts alone won't tell us what to do, but knowledge of the world will: the authorities don't advocate revolution.
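
A toy sketch of the point-- mine, and nothing like a real translation system-- is below. The grammar offers two antecedents for "they", and only a scrap of world knowledge picks one, and with it the French gender:

# Candidate antecedents, with the French pronoun each would require.
candidates = {"the authorities": "ils", "the women": "elles"}

# A scrap of hand-coded world knowledge, just for the example.
advocates_revolution = {"the authorities": False, "the women": True}

def resolve_they():
    """Syntax allows either antecedent; world knowledge decides."""
    plausible = [np for np in candidates if advocates_revolution[np]]
    return plausible[0] if plausible else None

antecedent = resolve_they()
print(antecedent, "->", candidates[antecedent])   # the women -> elles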

(I suspect that no serious AI researcher doubts this today-- though a few undergrads might hope (as I once did) that if you could just find the right approach, a pretty good AI would be just a few months' project.)

People feel that you can't really understand some things unless you've "lived through them". We distrust someone (or we should) who talks about sex, or marriage, or religion, or war, or life in Tibet, or the loss of a loved one, if they haven't directly experienced these things-- if all they have is book knowledge.

It's tempting, then, to say that programs won't understand these things either unless, in some sense, they experience them. The most obvious way is for them to have sensorimotor apparatus and go out there and do things-- for them to become robots or androids.

(Perhaps there's a shortcut? Humans are put together in such a way that we can't share direct experience. But computers aren't, and perhaps this flexibility will remain in an AI. Perhaps one AI could experience things, and then download its understanding to another. On the other hand, maybe this fungibility is precisely the consequence of the near-imbecility of current programs. Maybe to be conscious, a system has to be so complex and interconnected and interwoven that it can't simply download another complex, interconnected interweave of real-world experience.)

Good intentionality

Searle talks about intentionality in a way that sounds to his critics like pure magic; and when he starts talking about "causal powers" it sounds to them like lunacy. Searle's question, though, is good. When does it make sense to say that a virtual representation, in a program or in a brain, refers to something in the world?

I don't think the answer is very mysterious; reference occurs when there's a causal relationship between the representation and the reality. What makes a number in your bank's computer into your actual bank balance? The fact that it's derived from actual deposits you made, and the fact that you can actually spend it. What makes the same number not your bank balance when the bank's programmers are writing or debugging the system? The fact that it isn't affected by your deposits and you can't spend it.
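
A sketch of the distinction in code-- a toy of my own, not a claim about how real banking software works-- just to show that "reference" here is a matter of causal wiring, not of what the number looks like:

class Account:
    """A number that refers to your money because it's causally tied to it:
    real deposits raise it, and it actually constrains what you can spend."""
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
    def spend(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

live = Account()
live.deposit(100)
live.spend(30)

debug_copy = 100   # the same number in a test harness: deposits don't change it,
                   # and you can't spend it-- so it refers to nothing
print(live.balance, debug_copy)   # 70 100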

Similarly, the London in your Civ3 game doesn't refer to the London in Britain, because nothing you can do in the game corresponds to anything that's happening in or to London. On the other hand, the airport codes in a ticketing system really do refer to real airports-- because the ticketing system allows you to buy a ticket to physically get to those airports.

This sort of reference is a bit tinny and disappointing. The airline system doesn't seem to know much about Heathrow just because it can help get you there. But we want a very simple sort of reference to build into more complicated sorts.

I think we're satisfied that a human being is referring to (say) horses because the reference is wideband: there are many causal relationships and feedback loops that tie our knowledge to actual horses.

I'd wager that Searle, reading about Schank's story understander, had a visceral reaction that the program just couldn't understand, especially via something as simple as a few scripts about restaurants. I think his reaction would have been correct if he had simply complained that the program's understanding was too shallow. Schank's scripts are a powerful idea, but they were many orders of magnitude short of a human's knowledge of the world.

But does the CR understand?

Driven to the wall, Searle's supporters often demand some sort of proof that the "system" does understand. Searle himself asks for this in BBS (p. 360), but abandons the demand in the SciAm article. This was wise, I think, because such a demand loses the thread of the argument. The CR is supposed to be a proof that programs don't think. As such, the burden is on Searle to prove that they don't. He's not allowed, in the middle of this, to demand a proof from the other side that they do.

For what it's worth, the answer to the question "Does the CR think?" is "Well, it depends." If you accept the Turing Test-- and if the CR really passes it-- then it must. As a matter of fact I think the Turing Test is misguided, and so passing the test leaves the question open.

I don't believe in accidental approaches to AI-- the notion (as in Heinlein's The Moon is a Harsh Mistress) that if you make a system complex enough, consciousness will simply pop into existence. I think consciousness, understanding, intentionality, qualia, emotion, and other interesting cognitive effects will occur because the AI designer explicitly puts them there.

To put it another way, I think that a successful AI will incorporate a successful theory of understanding. Both the Turing Test and Searle's CR are attempts to prejudge any such theory-- to make up facts about the brain simply by philosophizing, rather as medieval philosophers made up facts about the physical elements by reflecting on the eternal verities. We threw out their imaginings when good theories based on empirical facts came in, and I think the same thing will happen as we develop a real science of mind.

When we start to have good theories of how the mind works, we'll evaluate and test those theories on their merits and against the facts.