Phil 101: Searle’s Chinese Room

Intentional States

We reviewed the notion of an intentional state (see these notes from earlier in the term).

We discussed what it means to have states that non-derivatively have content or are about things — where the symbols in a newspaper exemplify having content only derivatively (because the newspaper’s authors and readers associate those symbols with meanings). Humans, and probably some animals too, on the other hand, have states that are non-derivatively about things. True, we’ll say things like “The newspaper says that Nixon resigned.” But the newspaper doesn’t itself really talk or have opinions. (I mean the printed newspaper, not the company that publishes it.) It gets its intentional properties from its authors and readers; from their beliefs and intentions and expectations. The newspaper only has what we can call borrowed or derived intentionality.

Similarly, we sometimes ascribe intentional states to computers and other man-made devices. For instance, you might say that your chess-playing computer wants to castle king-side. But your chess-playing computer probably doesn’t have any non-derivative, unborrowed intentionality. It is too simple a device. Such intentionality as it has, it gets from the beliefs and intentions of its programmers. (Or perhaps we’re just “reading” intentionality into the program, in the way we read emotions into baby dolls and stuffed animals.)

The question whether a being has states that are non-derivatively about things is different from the question of whether the being freely chose to have those states. Even if we manipulated Canon into having certain beliefs, or genetically engineered him so as to ensure that he has those beliefs, still if Canon ends up with an ordinary human brain (just one with an unusual history), surely he’ll have beliefs and other attitudes with their own content. We may have “put those thoughts into his head”; but the sense in which we did that is different from the sense in which we put the meanings into the symbols in the newspaper. Canon really does have his own thoughts, with real content, in a way that the newspaper doesn’t. This is separate from the question of how much choice Canon had, or how much free will he exercised, in acquiring those thoughts.

Indeed, it’s not even clear how much free will ordinary humans exercise in acquiring the thoughts they have. Some philosophers think we don’t have any free will at all; when they say this, they don’t think they’re saying we lack beliefs, or that our minds have no more contentful thoughts in them than newspapers do.

Unfortunately, there’s a way of formulating the question whether a being has states with non-derivative content that can obscure all this. Philosophers sometimes phrase this as the question whether the being has original intentionality. When it’s put that way, you may look at the manipulated humans and think: obviously their thoughts aren’t “original”; someone else put them there. Now consider a sophisticated AI, programmed by humans, that is much more flexible and seemingly “intelligent” than your chess-playing computer. You may want to say in this case too: Obviously someone else chose how to program the AI. (Or at least, chose how to program the programs that controlled how the AI evolved.) So its thoughts aren’t “original,” either.

But these responses would be too fast. The question is not supposed to be about how much free choice these beings had in choosing their thoughts, or how much of a role other people had in choosing them. It’s supposed instead to be a question of whether they have more contentful thoughts than newspapers do. Surely humans do, even if they were manipulated into having those thoughts. And perhaps sophisticated AIs will too. Or perhaps they won’t. But the mere fact that humans played some role in programming them doesn’t straightforwardly prove that they don’t.

So that’s one thing I emphasized in our discussion: even if AIs are programmed by humans, that doesn’t yet answer the question whether they’re more like manipulated people, or whether they’re more like newspapers.

Instead of “original intentionality,” another label that’s sometimes used here is “intrinsic intentionality.” This label can also be misleading, but in ways we haven’t discussed.

Searle’s Chinese Room

The functionalist (what Searle calls “the advocate of strong AI”) believes that if we have a computer running a sophisticated enough program, a program that’s similar enough to the ones our brains are running, then the computer will have its own, “original” intentional states. This is the view Searle wants to argue against.

Functionalism says any way of implementing the right program gives you everything that’s needed for there to be a mind: real beliefs, understanding, intelligence, and so on. Well, here’s one way that the program could be implemented:


[Illustration: Sam Brown, explodingdog]

Jack does not understand any Chinese. However, he inhabits a room which contains a book with detailed instructions about how to manipulate Chinese symbols. He does not know what the symbols mean, but he can distinguish them by their shape. If you pass a series of Chinese symbols into the room, Jack will manipulate them according to the instructions in the book, writing down some notes on a scratchpad, and eventually will pass back a different set of Chinese symbols. This results in what appears to be an intelligible conversation in Chinese. (In fact, we can suppose that “the room” containing Jack and the book of instructions passes a Turing Test for understanding Chinese.)
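To make vivid what “manipulating symbols according to the instructions in the book” amounts to, here is a minimal sketch in Python (my own toy illustration, not anything Searle gives; the sample entries and the function name are made up). The “rule book” is just a lookup table keyed on the shapes of the input symbols; nothing in the program represents what any symbol means, and a real rule book capable of passing a Turing Test would also need scratchpad state and vastly more complicated rules.

    # A toy "rule book": it pairs input symbol-shapes with output symbol-shapes,
    # with no representation anywhere of what the symbols mean.
    # (Hypothetical sample entries; a real book would be enormously larger.)
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "你叫什么名字？": "我叫小明。",
    }

    def chinese_room(input_symbols: str) -> str:
        """Pass back whatever string the rule book pairs with these shapes."""
        return RULE_BOOK.get(input_symbols, "对不起，我不明白。")

    print(chinese_room("你好吗？"))  # prints a fluent-looking reply, though
                                     # nothing here understands Chinese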

According to Searle, Jack does not understand Chinese, even though he is manipulating symbols according to the rules in the book. So manipulating symbols according to those rules is not enough, by itself, to enable one to understand Chinese. It would not be enough, by itself, to enable any and every system implementing those rules to understand Chinese. Some extra ingredient would be needed. And there’s nothing special about the mental state of “understanding” here. Searle would say that implementing the Chinese room software does not, by itself, suffice to give a system any intentional states: no genuine beliefs, or preferences/desires, or intentions, or hopes or fears, or anything. It does not matter how detailed and sophisticated that software is. Searle writes:

Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.

The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man’s ability to understand Chinese. (1980, p. 368)

Searle denies that the Chinese room has any of its own thoughts or intentional states (other than Jack’s intentional states). Some philosophers are willing to grant that machines with the kind of hardware Searle is describing may have some intentional states. They doubt, though, whether those machines have any “qualitative” or “phenomenal” states like pain or perceptual experience. As it’s sometimes put, they doubt whether there is “anything it’s like” to be one of these machines.

Does the Whole System Understand Chinese?

We asked how the functionalist might respond to Searle’s Chinese Room argument. One functionalist response says that even though the person inside the room may not have the relevant mental states (understanding Chinese, having beliefs about the Han dynasty, liking the taste of shrimp), the whole system does have these states.

According to this response, Jack does not himself implement the Chinese room software. He is only part of the machinery. The system as a whole — which includes Jack, the book of instructions, Jack’s scratchpad, and so on — is what implements the Chinese room software. The functionalist is only committed to saying that this system as a whole understands Chinese. It is compatible with this that Jack does not understand Chinese.

One objection Searle makes to this response says: let’s suppose the guy in the room memorizes the whole book and scratchpad and does everything in his head. He doesn’t then have to have any kind of “Aha! Now I understand Chinese” experience. From his perspective, Chinese can seem as opaque as it always did. At the same time, he would still be running the program; and now he would be the whole system. So how can the system understand Chinese, if he is the whole system, but he still doesn’t understand Chinese?

Here is Searle presenting this objection:

My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn’t anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him. If he doesn’t understand, then there is no way that the system could understand because the system is just a part of him. (1980, p. 359)

Towards the end of class, we discussed ways the functionalist could respond to Searle’s argument.

Who Knows What?

One question that sometimes comes up in class discussion is what the people in the scenario Searle describes would be able to know. I think we can bracket such questions, and settle what to think about this debate without needing to settle them. Here is what I hope will be a helpful way to explain this:

Say I ask you to consider a hypothetical story, where Beatrice dislikes Allen but doesn’t have any plan to hurt him. One night, though, they’re arguing while walking across a bridge; he grabs for her cell phone, and she pushes him away hard. He stumbles backward, and then falls off the bridge and dies. Beatrice isn’t sorry that he’s dead, but she didn’t mean for this to happen, and she’s afraid she’ll get in trouble. Luckily no one else was around to see what happened, and Allen’s body washes away in the river. Now I want us to discuss whether what happened counts as Beatrice having murdered Allen. We start to discuss that question. One of us argues that it’s not murder; the other argues that it is murder but that Beatrice shouldn’t go to jail for it. What if I now object to your arguments and proposals: “How do you know what happened? You weren’t there, and there’s no evidence left that Beatrice did it.”

Wouldn’t that objection seem off-base? I just told you the story about what happened. Sure, no one inside the story may know what happened. (Maybe even Beatrice doesn’t know anymore, because she later lost her memory.) But we outsiders can think about the story as stipulated, and decide whether, if that’s what happened, it counts as murder. (It could be that the story isn’t complete enough to give a definite answer. Maybe it depends on whether Beatrice did this or thought that at such-and-such a point in the story, and that’s something I left unsettled. Then we’d have to split the story and talk about what’s true in either case.)

One kind of question asks what people inside the story would be able to know. A different kind of question asks what would be true if the story is as described (and doesn’t covertly contradict itself): was Allen murdered or not? As I described things, I was inviting you to engage with the second kind of question, not the first. That’s why my complaint about there being no evidence left is off-base. In the same way, Searle is inviting us to engage with the second kind of question about his thought-experiment.

Responding to Searle’s Objection

There are several problems with Searle’s objection to the proposal that the whole System understands Chinese, even if Jack doesn’t.

In the first place, Searle’s claim that “he understands nothing of the Chinese, and a fortiori neither does the system, because there isn’t anything in the system that isn’t in him” relies on a dubious form of inference. This is not a valid inference:

He doesn’t weigh five pounds, and a fortiori neither does his heart, because there isn’t anything in his heart that isn’t in him.

Nor is this:

Jack wasn’t designed by Chinese Rooms by Google™, and a fortiori neither was his Chinese room system, because there isn’t anything in the system that isn’t in him.

So why should the inference work any better when we’re talking about whether the system understands Chinese?

A second, and related, problem is Searle’s focus on the spatial location of the Chinese room system. This focus distracts attention from the important facts about the relationship between Jack and the Chinese room system. Let me explain.

Emulators

I invited you to think about programs like an Android emulator that runs on a Mac:

Some computing systems run software that enables them to emulate other operating systems, and to run software written for those other operating systems. For instance, you can get software that lets your Macintosh emulate an Android device. Suppose you do this. Now consider two groups of software running on your Macintosh: (i) the combination of the Macintosh OS and all the programs it’s currently running (including the emulator program), and (ii) the combination of the Android OS and the activities of some program it’s currently running. We can note some important facts about the relationship between these two pieces of software:

  1. The Android software is in some sense included or incorporated in that Mac software. It is causally subordinate to, or dependent on, the Mac software. The activities of the Mac software bring it about that the Android software gets implemented.

  2. Nonetheless, the “incorporated” software can be in certain states without the “outer” software thereby being in those states, too. For example: the Android software may crash and become unresponsive, while the Mac software (including the emulator) keeps running. It’s just that the emulator’s window would display a crashed Android program. Another example: the Android software might be treating YouTube as its frontmost, active program; but if the emulator isn’t the frontmost application on your Mac, the Mac software could be treating Chrome as its frontmost, active program.

It’s this notion of one piece of software incorporating another piece of software which is important in thinking about the relation between Jack and the Chinese room software. According to the functionalist, when Jack memorizes all the instructions in the Chinese book, he becomes like the Mac software, and the Chinese room software becomes like the emulated Android software. Jack fully incorporates the Chinese room software. That does not mean that Jack shares all the states of the Chinese room software, nor that it shares all of his states. If the Chinese room software crashes, Jack may keep going fine. If the Chinese room software is in a state of believing that China was at its cultural peak during the Han dynasty, that does not mean that Jack is also in that state. And so on. In particular, for the Chinese room software to understand some Chinese symbol, it is not required that Jack also understand that symbol.
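Here is a toy sketch of that incorporation relationship in Python (my own analogy, with made-up class and attribute names; nothing this simple is really an emulator). The point it models is just that the outer system implements every step of the inner one, yet the inner system’s states are not thereby states of the outer system.

    class Guest:
        """Stands in for the emulated Android software (or the Chinese room software)."""
        def __init__(self):
            self.frontmost_app = "YouTube"
            self.crashed = False

        def step(self, event):
            if event == "CRASH":
                self.crashed = True

    class Host:
        """Stands in for the Mac software (or for Jack, once he has memorized the book)."""
        def __init__(self):
            self.frontmost_app = "Chrome"
            self.guest = Guest()    # the incorporated, causally dependent system

        def run_guest(self, event):
            self.guest.step(event)  # the host's activity brings the guest's states about

    host = Host()
    host.run_guest("CRASH")
    print(host.guest.crashed)       # True: the guest is in a crashed state ...
    print(host.frontmost_app)       # Chrome: ... but the host is not thereby in
                                    # that state, and keeps running its own programs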

The fact that, once Jack has “internalized” the Chinese room software, it is spatially internal to Jack is irrelevant. This just means that the Chinese room software and Jack’s software are being run on the same hardware (Jack’s brain). It does not mean that any states of the one are thereby states of the other.

In the functionalist’s view, what goes on when Jack “internalizes” the Chinese room software is this. Jack’s body then houses two distinct intelligent systems — similar to people with multiple personalities. The Chinese room system is intelligent. Jack implements its thinking (like the Mac emulation software implements the activities of some Android software). But Jack does not thereby think the Chinese room system’s thoughts, nor need Jack even be aware of those thoughts. Neither of the intelligent systems in Jack’s body is able to directly communicate with the other (by “reading each other’s mind,” or anything like that). And the Chinese room system has the peculiar feature that its continued existence and the execution of its intentions depend on the other system’s activities and work schedule.

This would be an odd set-up, were it to occur. (Imagine Jack trying to carry on a discussion with the Chinese room software, with the help of a friend who does the translation!) But it’s not conceptually incoherent.

Searle’s Positive View

Searle is not a dualist. He believes that thinking is an entirely physical process. He’s just arguing that the mere manipulation of formal symbols cannot by itself suffice to produce any genuine thought (that is, any non-derivative, unborrowed, “original” or “intrinsic” intentionality). Whether thinking takes place importantly depends on what sort of hardware is doing the symbol manipulation.

Some sorts of hardware, like human brains, clearly are of the right sort to produce thought. Searle thinks that other sorts of hardware, like the Chinese room, or beer cans tied together with string and powered by windmills, clearly are not of the right sort to produce thought — no matter how sophisticated the software they implement.

Are silicon-based computers made of the right kind of stuff to have thoughts? Perhaps, perhaps not. In Searle’s view, we do not know the answer to this. Maybe we will never know. Searle just insists that, if silicon-based computers are capable of thought, this will be in part due to special causal powers possessed by silicon chips. It will not merely be because they are implementing certain pieces of software. For any software the silicon-based computers implement can also be implemented by the Chinese room, which Searle says has no intentional states of its own (other than Jack’s intentional states).

Searle writes:

It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality (I am, I suppose, the instantiation of any number of computer programs), but as far as we know it is because I am a certain sort of organism with a certain biological (i.e. chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something with those causal powers could have that intentionality. Perhaps other physical and chemical processes could produce exactly these effects; perhaps, for example, Martians also have intentionality but their brains are made of different stuff. That is an empirical question, rather like the question whether photosynthesis can be done by something with a chemistry different from that of chlorophyll.

But the main point of the present argument is that no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality, and they have by themselves no causal powers except the power, when instantiated, to produce the next stage of the formalism when the machine is running. (1980, p. 367)