AI and the Turing Test for Machine Intelligence

One interesting question is whether there are any fundamental differences between human thought and any thought that computers will ever be capable of. The question we're concerned with is somewhat different, though. Our question is: Will computers ever be able to genuinely think at all? Will they ever really be able to have genuine thoughts, intelligence, self-consciousness, a real mental life, even if it might be different in some ways from our own?

For any disanalogy you think there is between humans and computers---whenever you're tempted to say "Computers will never be able to X"---ask yourself:

  1. Is having feature X something that's really necessary, if you're to genuinely think and be intelligent at all? Or is it just an idiosyncrasy of the way we happen to think?
  2. Would it be in principle impossible for a computer to be programmed to have X? Why? Why would it be harder to program a computer to have X than to program it to do other things?
  3. Why do you think you have better reason to believe other people have feature X than you could ever have that a computer has it?

In his 1950 article "Computing Machinery and Intelligence," the mathematician Alan Turing puts forward a test we can use to determine whether machines are genuinely thinking. The test works like this. A human judge carries on remote conversations with a computer and another human, and has to guess which is the computer and which the human. If the computer is often able to fool the human judge into thinking it's human, then that computer passes the test, and Turing claims we should regard it as genuinely thinking and as having genuine intelligence. This has come to be known as the Turing Test for computer intelligence.
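Stated as a protocol, the test is simple. Here is a minimal sketch in Python of the setup; all of the names (run_turing_test, machine_respond, and so on) are invented for illustration, and the judge's guessing is left as a function the human judge supplies.

    import random

    # A hypothetical sketch of Turing's setup; every name here is invented.
    # The judge converses over two anonymous channels, "A" and "B", and
    # must guess which one hides the machine.
    def run_turing_test(questions, machine_respond, human_respond, judge_guess):
        # Randomly assign the machine and the human to the two channels.
        respondents = [("machine", machine_respond), ("human", human_respond)]
        random.shuffle(respondents)
        channels = {"A": respondents[0], "B": respondents[1]}

        # Collect each channel's answers; the judge never sees anything else.
        transcripts = {label: [] for label in channels}
        for q in questions:
            for label, (_, respond) in channels.items():
                transcripts[label].append((q, respond(q)))

        guess = judge_guess(transcripts)  # the judge returns "A" or "B"
        machine_label = next(label for label, (who, _) in channels.items()
                             if who == "machine")
        return guess == machine_label     # True means the machine was caught

A machine "passes" when, over many runs with many judges, this function comes back False about as often as it comes back True.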

Note that Turing is only claiming that passing his test suffices for being intelligent. His test may be very hard; it may set the bar too high. Perhaps someday there will be machines that really are intelligent but aren't able to pass Turing's Test. Turing acknowledges this; he doesn't want to say that being able to pass his test is a necessary condition for being intelligent. He's only saying that machines which are able to pass his test are intelligent.

Turing doesn't say very much about who's supposed to be judging these tests. But that's important, because it's very easy to fool computer neophytes into thinking that some program is really intelligent, even if the program is in fact totally stupid. One early computer program called ELIZA pretended to be a psychotherapist holding "conversations" with its patients. ELIZA was a very simple program. (In fact, you can nowadays get a version of ELIZA for your Palm Pilot.) Nobody who understands the program is at all tempted to call it intelligent. What the program does is this. It searches the user's input for keywords, like the word "father." If it finds a keyword, it issues back some canned response, like "Do you often think about your father?" You can try ELIZA out for yourself; versions of it are available on the web.
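To see how little is going on under the hood, here is a minimal sketch in Python of the keyword-and-canned-response technique just described. It is far cruder than Weizenbaum's actual program, and the particular keywords and replies are invented for illustration.

    import re

    # Keyword patterns paired with canned responses (illustrative only).
    RULES = [
        (r"\bfather\b",   "Do you often think about your father?"),
        (r"\bmother\b",   "Tell me more about your mother."),
        (r"\bi feel\b",   "Why do you feel that way?"),
        (r"\bcomputer\b", "Do computers worry you?"),
    ]

    # Content-free fallbacks for when no keyword matches.
    FALLBACKS = ["Please go on.", "I see.", "Can you say more about that?"]

    def respond(user_input, turn=0):
        # Return the canned response for the first keyword found.
        for pattern, reply in RULES:
            if re.search(pattern, user_input, re.IGNORECASE):
                return reply
        return FALLBACKS[turn % len(FALLBACKS)]

    print(respond("I had an argument with my father yesterday"))
    # -> "Do you often think about your father?"

That is the whole trick: no parsing, no understanding, just pattern-matching against a fixed list.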

There are also more recent and sophisticated programs that work by similar basic principles.

Well, as I said, no one who really understands ELIZA wants to claim that this program is intelligent. If we're ever going to construct a real artificial intelligence, it will take a much more sophisticated approach than was used to make ELIZA. But when Turing Tests are set up at public computing exhibitions, and the judges are just people taken off the street who aren't very familiar with computer programs of this sort, then programs like ELIZA end up fooling those judges about half the time.

Hence, if you want to say that passing the Turing Test really is a sign that some program is intelligent, then it's going to make a difference who's judging the Turing Test. We'll have to use better judges than just ordinary people off the street.

Turing considers a number of objections to his test. Let's talk briefly about two of the interesting objections.

One objection has to do with the fact that machines can't make mistakes. Turing introduces an important distinction here, between errors of functioning and errors of conclusion. Examples of errors of functioning would be mechanical or electrical faults that prevent a machine from doing what it's designed to do. We can also include "bugs" in the machine's software as errors of functioning. These prevent the machine from working as it's intended to work. Errors of conclusion, on the other hand, would be mistakes like saying "19" in response to the question "What is 8+9?" Now it is true that humans make many errors of this second sort; but Turing points out that there's no reason why machines shouldn't also make errors of this second sort. Whether a machine will make certain errors of conclusion really depends on the nature of its software. If we program the computer to add in the ways calculators do, and the computer executes its program perfectly, then it will always give the right answer to addition problems. But if we instead program the computer to do math in the ways that humans actually reason mathematically, then it might very well answer some addition problems incorrectly.
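As a hedged illustration of that last point, here is what "programming a computer to add the way humans do" might look like: a digit-by-digit adder that, like a distracted person working on paper, occasionally forgets to carry. The error model is invented for this example; the point is just that the wrong answers flow from the software's design, not from any hardware fault.

    import random

    def humanlike_add(x, y, slip_rate=0.05):
        # Add two non-negative integers column by column, the way a
        # person does on paper, occasionally forgetting to carry the 1.
        result, carry, place = 0, 0, 1
        while x or y or carry:
            column = (x % 10) + (y % 10) + carry
            result += (column % 10) * place
            carry = column // 10
            if carry and random.random() < slip_rate:
                carry = 0  # the "human" slip: a forgotten carry
            x, y, place = x // 10, y // 10, place * 10
        return result

    # The machine executes this program flawlessly (no errors of
    # functioning), yet sometimes gives a wrong sum (an error of
    # conclusion), e.g. answering 7 instead of 17 for 8 + 9.
    print(humanlike_add(8, 9))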


[Illustration: Sam Brown, explodingdog]
You might protest: But won't some low-level part of the computer still need to be adding and multiplying correctly, in order for the computer to run any program? Yes, but it's equally true that your low-level neurons need to add and process electrochemical signals properly, for you to be doing any thinking. That doesn't make you a perfect adder. You don't know what your neurons are doing. That neural activity might constitute your making an arithmetic mistake. Now, why can't the same be true for the computer?

People often say that if we ever succeed in constructing a real artificial intelligence, it will be much more "rational" and "logical" and "unemotional" than human beings are. I don't see why that's so. Why couldn't the AI be running software that makes it much less logical, and much more emotional, than human beings? What tends to happen is we think of the machine running smoothly and perfectly in the sense of not breaking down, suffering no errors of functioning. So we naturally assume that the machine won't make any errors of conclusion either. We naturally assume that it will always "do the rational thing." But that doesn't really follow. Whether the computer makes mistakes, and whether it "acts rational" or "acts emotional," will depend on the nature of its software....

A second objection that Turing considers has to do with the thought that a computer can only have a fixed pattern of behavior; it can only do what we program it to do.

In a sense this is true. What the computer does depends on what its program tells it to do. But that doesn't mean that the computer's behavior will always be fixed and rigid, in the way the ELIZA program's responses seem fixed and rigid. Computers can also be programmed to revise their own programs, to "learn from their mistakes." For instance, a chess-playing computer can be programmed to thoroughly analyze its opponents' strategy whenever it loses a game, and incorporate those winning strategies into its own database. In this way, it can learn how to use those strategies in its own games, and learn how to look out for them in its opponents' games. This kind of computer would get better and better every time it played chess. It would quickly become able to do things its programmers never anticipated (though of course they're the ones who programmed it to learn from its mistakes in this way). So it's not clear that computers have to be "fixed" and "rigid" in any sense that prevents them from being intelligent.
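Here is a schematic sketch of that kind of self-revision, with all names and data structures invented for illustration (real chess programs use far more sophisticated machinery):

    class LearningChessPlayer:
        def __init__(self):
            # Opponent lines that have beaten us, stored for later use.
            self.strategy_db = []

        def record_game(self, opponent_moves, we_won):
            # After a loss, file away the line that defeated us.
            if not we_won:
                self.strategy_db.append(tuple(opponent_moves))

        def choose_opening(self):
            # Reuse a line that once beat us, if we've learned any.
            if self.strategy_db:
                return list(self.strategy_db[-1])
            return ["e4"]  # the opening the programmers gave us

        def sees_known_threat(self, moves_so_far):
            # Watch for dangerous lines we've been beaten by before.
            prefix = tuple(moves_so_far)
            return any(line[:len(prefix)] == prefix
                       for line in self.strategy_db)

    player = LearningChessPlayer()
    player.record_game(["d4", "c4", "Nc3"], we_won=False)
    print(player.choose_opening())                 # ['d4', 'c4', 'Nc3']
    print(player.sees_known_threat(["d4", "c4"]))  # True

After enough games, the player's behavior depends on a database its programmers never wrote, even though they wrote the rules for updating it.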

In any case, isn't there a sense in which our behavior is "fixed" and "rigid," too? Isn't our behavior dictated by our genetic and neurophysiological make-up, and the ways our senses are stimulated? That doesn't show that we're not intelligent. Why should it show that a computer can't be intelligent? We'll talk about this in more detail in future classes.

How to Interpret the Turing Test

Suppose some computer program were able to successfully pass Turing's Test, even when the test is administered by sophisticated, knowledgeable judges. There are three responses one could have to this.

  1. Response 1 says that computers can never have real thoughts or mental states of their own. They can merely simulate thought and intelligence. So all that passing the Turing Test proves is that the computer is a good simulation of a thinking thing.
  2. Response 2, on the other hand, holds that passing the Turing Test does give us good reason to think that a computer really has thoughts and other mental states. But passing the Turing Test would not guarantee that the computer has those mental states; it would only be good evidence that it has them. After all, passing the Turing Test is just a matter of behaving in certain ways, and it is a real step to go from something's behaving intelligently to its really being intelligent. With regard to really being intelligent, more things may matter than just whether the machine passes certain behavioral tests. For example, perhaps some computer acts very intelligent, but then we learn it's just running a search-and-canned-response program like ELIZA, with a big database of canned responses. We may retract our belief that the computer is genuinely intelligent, if we learn that that is what's going on. When a computer passes Turing's Test, though, that does give us some evidence for taking the computer to be intelligent, and not merely simulating intelligence.
  3. Response 3 goes even further. It says that passing Turing's Test suffices for being intelligent. According to this response, being able to respond to questions in the sophisticated ways demanded by the Turing Test is all there is to being intelligent. This is an example of what we call a behaviorist view about the mind. A behaviorist about intelligence says that all there is to being intelligent is behaving, or being disposed to behave, in certain ways in response to specific kinds of stimulation. If a tree behaved in those ways, the tree would also count as intelligent. When a human body no longer behaves in those ways (e.g., when it becomes a corpse), then we say there is no longer any mind "in" that body. We'll talk about this kind of view more in later classes.

If you favor the first of these responses, then ask yourself why you think you have good reason to believe that other human beings really have genuine thoughts and other mental states. Don't you have just the same kinds of evidence for thinking they are intelligent as you'd have for thinking that some sophisticated computer program is intelligent, namely, the ways they act and behave? Or do you think you have better evidence for thinking that other human beings are intelligent? If so, what is that better evidence?