by Adam Lee on January 13, 2007

A Response to Ned Block’s “Blockhead”

In a classic 1981 paper titled “Psychologism and Behaviourism”, the philosopher Ned Block proposed a thought experiment that has been dubbed “Blockhead” in his honor. Block’s experiment has to do with the Turing test, itself a classic proposal on how to test for the presence of intelligence in a machine (or some other suitable non-human agent). The Turing test consists of a human, the judge, conversing via a computer terminal with two agents. One of the agents is another human being; the other is a machine. If the judge cannot reliably tell which is which, the Turing test tells us that the machine should be considered to have the same intelligence as a human.

Block’s proposal was to build a computer program that would store within its memory banks a precomputed reply to every possible question it might conceivably be asked. Everything: from “What are your fondest memories of my Uncle Morris?” to “Do you prefer the smell of vanilla extract to gasoline?” to “Could you summarize the plot of Romeo and Juliet in a hundred words or less?” For the many nonsensical queries it might receive, the program could be instructed to deliver an answer expressing confusion or bewilderment. Its meaningful answers might be written to consistently express a single perspective or outlook, thus simulating not just a person but a personality. Block’s assertion is that such a program, despite not being intelligent, could pass a Turing test.
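
To make the structure concrete, here is a minimal sketch of the idea in Python. The two table entries are invented placeholders, standing in for the astronomically many a real Blockhead would need; every reply is a bare table lookup, and nothing is computed.

```python
# A minimal sketch of the lookup-table Blockhead. The entries here are
# invented placeholders; the real construct would need one for every
# question a judge could conceivably type.
canned_replies = {
    "What are your fondest memories of my Uncle Morris?":
        "Mostly the fishing trips. He never stopped talking.",
    "Do you prefer the smell of vanilla extract to gasoline?":
        "Vanilla, without question.",
}

def blockhead_reply(question: str) -> str:
    # Anything not in the table gets a stock expression of bewilderment,
    # as suggested above for nonsensical queries.
    return canned_replies.get(question, "I'm sorry, I don't follow you.")
```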

Blockhead is one specific example from a family of philosophical thought experiments I like to call “lookup table consciousness”: imaginary constructs that simulate consciousness by maintaining a massive list of actions to take in response to every imaginable circumstance. The Chinese Room is another, although the two differ in that the Chinese Room is usually said to possess some rule set or program that transforms input into output, whereas a Blockhead stores every possible output explicitly. Proponents of the Chinese Room such as John Searle argue that no computer program or machine, however well-constructed, could ever be conscious in the same way a human being is. However, Block makes only the weaker claim that some structures could simulate consciousness without actually being conscious.

I have previously discussed reasons why the Chinese Room could not pass a Turing test, and therefore would not cast doubt on the claims to intelligence of any machine that could. Namely, because it is said to rely on a stateless set of condition-action rules, the Chinese Room could not adequately respond to repetitious or context-dependent questions. However, we could imagine a Blockhead that could do this. Rather than a simple list of queries and answers (which would be unmasked by the same stratagem), we could imagine a Blockhead that stores every possible conversation in a form analogous to a branching tree, where each question and answer represents a decision point that branches out into an innumerable array of possibilities for the next query and reply. In such a scenario, every reply that is given depends on what has come before, and so there is a realistic possibility of answering context-sensitive questions correctly.
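
As a sketch of this version (again with invented placeholder entries), the only structural change is that the table is keyed on the whole conversation so far rather than on the latest question alone.

```python
# A sketch of the tree-structured Blockhead: the key is the entire
# conversation history, so each reply can depend on everything that
# came before. The entries are invented placeholders.
from typing import Dict, Tuple

conversation_tree: Dict[Tuple[str, ...], str] = {
    ("What's your name?",): "Call me Ned.",
    ("What's your name?", "Didn't you just tell me your name?"):
        "I did. It's still Ned.",
}

def tree_blockhead_reply(history: Tuple[str, ...]) -> str:
    # Because the full history selects the answer, repeated or
    # context-dependent questions no longer unmask the machine.
    return conversation_tree.get(history, "I'm not sure what you mean.")
```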

It should be clear just what a staggeringly impossible and pointless endeavor this is. Even if every atom of the visible universe were used for storage (if the entire cosmos were turned into computronium), we still would not have even a fraction of a fraction of the capacity that would be required to build a Blockhead, to say nothing of the unimaginable time it would take to compile a list of answers to every syntactically valid question. But of course, we are doing philosophy here, and it is logical possibility, not practicality, that is relevant in that field. Even if a Blockhead will never be built, suppose one were: what would that tell us?
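
Before taking up that question, a back-of-the-envelope calculation shows just how badly the storage demand outruns the visible universe. The numbers below are purely illustrative assumptions: a vocabulary of only 10,000 possible sentences per turn, conversations a mere 20 turns long, and the standard estimate of roughly 10^80 atoms in the visible universe.

```python
# Illustrative arithmetic only: 10,000 possible sentences per turn and
# 20-turn conversations are absurdly generous simplifications, yet the
# count of distinct conversations already matches the estimated number
# of atoms in the visible universe (~10**80).
sentences_per_turn = 10_000
turns = 20
atoms_in_visible_universe = 10**80

distinct_conversations = sentences_per_turn ** turns  # 10**80
print(distinct_conversations >= atoms_in_visible_universe)  # True
```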

To more clearly illuminate the principle at work, consider a different thought experiment: the Chance Conversation Machine. The CCM is another program designed to participate in a Turing test, one that takes input from a keyboard and sends output to a terminal. But the CCM makes no effort to create an intelligible response to its interrogator’s queries. No matter what input data it receives, it discards that data, generates a random stream of bits, and outputs it to the screen.
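
The mechanism could hardly be simpler to write down. Here is a sketch, with an arbitrary choice of 64 random bytes per reply:

```python
# A sketch of the Chance Conversation Machine: the input is read and
# then thrown away, and the "reply" is nothing but random bytes.
import os

def ccm_reply(prompt: str) -> str:
    _ = prompt  # the interrogator's input is discarded entirely
    random_bytes = os.urandom(64)  # 64 is an arbitrary reply length
    # Decoded as text, this will almost always be unreadable gibberish.
    return random_bytes.decode("ascii", errors="replace")
```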

Of course, the vast majority of the time this will result in total gibberish. But if the CCM’s output is truly random, all possible outcomes are guaranteed to occur eventually. Once in a great while, its random output will fall into the patterns that code for English characters. Once in an even greater while, these characters will form meaningful words. And once in an unimaginably enormous while, the CCM will apparently respond meaningfully to the most recent thing its interrogator said. It may send a response that dazzles us with penetrating insight, provoke gales of laughter at its razor wit, or respond to our troubles with understanding and sympathy. It may even seem to be aware that its output is purely random, and express its apparent regret that its next reply probably will not be so erudite.
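
To put a number on “unimaginably enormous”: using the arbitrary 64-byte replies of the sketch above, the chance that a single reply happens to match one particular 64-character English sentence is one in 256^64, a figure more than 150 digits long.

```python
# Rough odds that a single 64-byte random reply exactly matches one
# particular 64-character sentence: 1 in 256**64.
odds = 256 ** 64
print(len(str(odds)))  # 155 -- i.e. the odds are roughly 1 in 10**154
```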

Plainly, the CCM is not conscious, though it might occasionally seem to be so. Consciousness by definition requires genuine understanding of one’s situation, not merely meaningful output in response to it. The CCM has the latter, but not the former. For the same reason, a Blockhead is not conscious either. Neither of these imaginary constructs performs any actual analysis of its sensory data, and without analysis, there can be no genuine comprehension. The capacity for analysis is not a sufficient condition for consciousness, but it is clearly a necessary one.

Where a Blockhead simulates consciousness using unrealistically enormous amounts of space, the CCM simulates consciousness using similarly unrealistically enormous amounts of time. Although logically we must account for bizarre possibilities like these, they are not realistic possibilities. It entails no self-contradiction to imagine them existing, but they could never actually be built. In particular, a Blockhead could not exist in our universe – there is not enough matter in the cosmos to store all its possible actions, and even if there were, the finite speed of light sets a finite horizon for communication that would make it impossible to access all the multibillion-light-year-distant memory banks that would need to be queried whenever the consciousness simulator was presented with a new challenge.

Far more reasonable, given the evidence available to us and our knowledge of the underlying laws of physics, is that there are and can be no such things as Blockheads. Far more reasonable is that any agent, whether organic or mechanical, that can pass a Turing test can do so because it performs some process analogous to thinking and understanding. This does not provide a logically airtight proof that an agent that succeeds at a Turing test must be intelligent, but then again, when do we ever have that impossible degree of certainty about anything?

As I said, genuine understanding and not merely meaningful response is a necessary condition for consciousness, and the two are not logically required to go together. In the strictest sense, this is true. But in our imperfect, inductive world, we can depend on the latter to be a reliable indicator of the former. To the degree we believe our worldview is not the product of a Cartesian demon concocting illusions to deceive us, we should similarly believe that anything that passes a Turing test is truly conscious and not merely a cunning simulation.