A Response to John Searle’s Chinese Room Analogy
John Searle. “Minds, Brains, and Programs.” Behavioral and Brain Sciences, vol. 3 (1980), pp. 417–424.
In a famous 1980 paper titled “Minds, Brains, and Programs”, the philosopher John Searle proposed a thought experiment, now known as the Chinese Room, concerning the possibility of artificial intelligence. Searle has no objection to “weak AI”, the claim that a properly programmed computer can help teach us about the mind; but he opposes “strong AI”, the claim that a properly programmed computer actually would be a mind, with cognitive states just like those of humans, and he purports to prove with this thought experiment that such a thing is impossible.
The Chinese Room is described as follows: Imagine that a person is locked in a room with a slot in the door. At regular intervals, a slip of paper covered with indecipherable squiggles comes through the slot. The person in the room looks up these squiggles in a book they possess, which instructs them to write a different set of squiggles on the paper and send it back out through the slot. As far as the person knows, they are just processing meaningless symbols; but unknown to them, the squiggles are Chinese characters, and they are actually carrying on a conversation with a Chinese speaker outside the room. The point of this analogy is that the person inside the room is acting just as a computer acts, processing symbols according to a set of rules. But this person does not understand what they are doing, and therefore a computer could never understand either. Searle concludes that a computer, even one that we could carry on a normal conversation with (i.e., a computer that could pass a Turing test), could never be conscious and could never understand in the way that a human being does.
However, I do not agree with this analysis. I have just one request for Searle and his supporters: I want to see this marvelous book.
Even if we disregard the question of how unimaginably huge such a book would have to be, there are several categories of questions to which, it would seem, no book, however much effort went into its creation, could give a correct and convincingly human-like answer. For example, one could ask the same question multiple times; a human being would rephrase the answer, become frustrated, or both. Also, one could ask a question whose answer depends on contextual information (for example: “Would you please estimate how much time has passed since the beginning of our conversation?” or “Could you please rephrase the last question I asked?”).
If, as Searle postulates, a Chinese Room can pass a Turing test, then it must be able to answer repeated and context-dependent questions correctly. But if the Chinese Room works in the way described by Searle, this is not possible. A book containing a static list of questions and answers – in effect, a list of rules reading “If you see X, do Y” – will unfailingly advise Y every time it is confronted with X. Therefore, a Chinese Room could easily be unmasked by asking it the same question repeatedly and observing that it gives the same answer every time. And it would be utterly helpless to answer context-dependent questions in a convincing way; it could only make vague, general statements which would be easily recognized as such. Either way, a Chinese Room masquerading as a conscious person could easily be detected, and thus could not pass a Turing test. It would neither be conscious nor seem to be conscious, and hence would say nothing at all about the feasibility of true artificial intelligence. That is why I ask Searle and his supporters: what does this book look like? How does it advise responding to queries such as these?
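To make the objection concrete, here is a minimal sketch in Python of the memoryless “If you see X, do Y” book; the rule table and canned replies are invented for illustration, not drawn from Searle. Because the lookup carries no state, the same question draws back the identical answer every time:

```python
# A memoryless Chinese Room modeled as a static lookup table.
# The rules and replies below are invented placeholders.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How is the weather?" -> "The weather is nice."
}

def chinese_room(squiggles: str) -> str:
    """Look up the incoming symbols and return the prescribed reply."""
    return RULE_BOOK.get(squiggles, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

# Asking the same question three times unmasks the room:
for _ in range(3):
    print(chinese_room("你好吗？"))  # identical answer on every repetition
```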
What if we modified the Chinese Room so that it could pass this test? What changes would we have to make?
In light of the above challenge, the first change is obvious. The book in our Modified Chinese Room (MCR) could no longer be just a simple lookup table – in other words, it could no longer be memoryless. It would have to store some kind of state, some information describing the questions it has seen and answers it has given so far. But note, also, that memory is a necessary component of consciousness. Consciousness requires some minimal continuity of experience; an agent with absolutely no memory, whatever its intellectual capabilities, could not be said to be conscious.
But the mere maintenance of that state would be useless if it could not affect the answers that the MCR gives. Therefore, the MCR could no longer be a static list of responses; it would have to perform some kind of computation, combining its background lexical knowledge with the state information already stored, to come up with answers to questions.
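As a rough sketch of the difference (again purely hypothetical, and not a claim about how such a book would actually be built), the MCR can be pictured as a responder that consults remembered state as well as its rule book, which lets it vary a repeated answer and handle the context-dependent queries raised above:

```python
import time

class ModifiedChineseRoom:
    """A toy Modified Chinese Room: a rule book plus remembered state.
    All rules and phrasings are invented for illustration."""

    def __init__(self, rule_book):
        self.rule_book = rule_book
        self.history = []                    # (question, answer) pairs seen so far
        self.started_at = time.monotonic()   # when the conversation began

    def answer(self, question):
        if question == "我们谈了多久了？":        # "How long have we been talking?"
            minutes = (time.monotonic() - self.started_at) / 60
            reply = f"大概{minutes:.0f}分钟了。"   # "About N minutes."
        elif question == "请重复我上一个问题。":    # "Please repeat my last question."
            reply = self.history[-1][0] if self.history else "你还没有问过问题。"
        else:
            reply = self.rule_book.get(question, "对不起，我不明白。")
            if any(q == question for q, _ in self.history):
                reply = "我已经说过了：" + reply    # "I already told you: ..." on repeats
        self.history.append((question, reply))
        return reply
```

Even this trivial bit of stored state and computation is enough to handle the two challenge questions from the earlier section, which is all that is claimed for it here.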
With these two new tools at its disposal, it would seem that the MCR could pass a Turing test including repeated and context-sensitive questions. But are we still certain that this system is not actually conscious? After all, it answers questions put to it by extracting relevant information from the question, adding this information to its remembered state, and processing both the state information and its own background knowledge to produce a coherent reply. This seems very much like what human beings do in the same circumstance. For one thing, how could the MCR ever “pick out” the relevant information from a query unless it, in some way, understood what was being said to it? Though it might still be argued that such a system would not be conscious, it is no longer obvious that it could not be conscious, which is what I seek to establish.
The Chinese Room belongs to a class of philosophical thought experiments that Daniel Dennett calls “intuition pumps”: analogies designed to elicit an intuitive conclusion in a simple realm and then transfer that conclusion to a more complex domain. While intuition pumps are an appealing tool, they can easily misdirect; very often, the conclusion drawn in the simple problem does not transfer straightforwardly to the more complicated one. This is especially true in the domain of the mind, where our understanding is still so limited that “intuitions” about how such a system could or could not possibly work are as perilous as they are common. The true lesson of the Chinese Room is that we should not use our limited imaginations to set bounds on reality.