In the 1950 article "Computing Machinery and Intelligence," the philosopher Alan Turing puts forward a test we can use to determine whether machines are genuinely thinking. The test works like this. A human judge carries on remote conversations with a computer and another human, and has to guess which is the computer and which the human. If the computer is often able to fool the human judge into thinking it's human, then that computer passes the test, and Turing claims we should regard it as genuinely thinking, and having genuine intelligence. This has come to be known as the Turing Test for computer intelligence.
Note that Turing is only claiming that passing his test suffices for being intelligent. His test may be very hard; it may set the bar too high. Perhaps someday there really will be machines that are genuinely intelligent but still unable to pass Turing's Test. Turing acknowledges this; he doesn't want to say that being able to pass his test is a necessary condition for being intelligent. He's only saying that machines which are able to pass his test are intelligent.
Turing doesn't say very much about who's supposed to be judging these tests. But that's important, because it's very easy to fool computer neophytes into thinking that some program is really intelligent, even if the program is in fact totally stupid. One early computer program called ELIZA pretended to be a psychotherapist holding "conversations" with its patients. ELIZA was a very simple program. (In fact, you can nowadays get a version of ELIZA for your Palm Pilot.) Nobody who understands the program is at all tempted to call it intelligent. What the program does is this. It searches the user's input for keywords like the word "father." If it finds a keyword, it issues back some canned response, like "Do you often think about your father?" You can try ELIZA out for yourself. There's a version available on the web:
Here are some more recent and sophisticated programs which work by similar basic principles:
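The keyword-and-canned-response mechanism described above can be sketched in a few lines of Python. This is a simplified illustration, not the actual ELIZA source: the real program also ranked keywords and transformed pronouns in the user's input, but the core trick is just this.

```python
import re

# A minimal ELIZA-style responder (illustrative sketch). The keywords
# and canned replies below are made up for the example.
RULES = [
    (r"\bfather\b", "Do you often think about your father?"),
    (r"\bmother\b", "Tell me more about your mother."),
    (r"\bdream\b",  "What does that dream suggest to you?"),
]
DEFAULT = "Please go on."

def respond(user_input):
    # Search the input for a keyword; if one matches, issue back the
    # canned response paired with it. No understanding is involved.
    for pattern, canned in RULES:
        if re.search(pattern, user_input.lower()):
            return canned
    return DEFAULT

print(respond("I argued with my father today."))
# -> Do you often think about your father?
```

Once you see that the whole "conversation" is table lookup of this sort, the temptation to call the program intelligent evaporates.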
Well, as I said, no one who really understands ELIZA wants to claim that this program is intelligent. If we're ever going to construct a real artificial intelligence, it will take a much more sophisticated approach than was used to make ELIZA. But when Turing Tests are set up at public computing exhibitions, and the judges are just people taken off the street, people who aren't very familiar with computer programs of this sort, then programs like ELIZA end up fooling those judges about half the time.
Hence, if you want to say that passing the Turing Test really is a sign that some program is intelligent, then it's going to make a difference who's judging the Turing Test. We'll have to use better judges than just ordinary people off the street.
Turing considers a number of objections to his test. Let's talk briefly about two of the interesting objections.
One objection has to do with the fact that machines can't make mistakes. Turing introduces an important distinction here, between errors of functioning and errors of conclusion. Examples of errors of functioning would be mechanical or electrical faults that prevent a machine from doing what it's designed to do. We can also include "bugs" in the machine's software as errors of functioning. These prevent the machine from working as it's intended to work. Errors of conclusion, on the other hand, would be mistakes like saying "19" in response to the question "What is 8+9?" Now it is true that humans make many errors of this second sort; but Turing points out that there's no reason why machines shouldn't also make errors of this second sort. Whether a machine will make certain errors of conclusion really depends on the nature of its software. If we program the computer to add in the ways calculators do, and the computer executes its program perfectly, then it will always give the right answer to addition problems. But if we instead program the computer to do math in the ways that humans actually reason mathematically, then it might very well answer some addition problems incorrectly.
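Turing's point here can be made concrete with a toy sketch. Assume nothing about how humans actually do arithmetic; the "human-style" rule below (dropping the carry) is just one hypothetical way a program could be written so that it makes errors of conclusion while running flawlessly.

```python
def calculator_add(a, b):
    # A calculator-style adder: executed perfectly, it is always right.
    return a + b

def human_style_add(a, b):
    # A toy "human-style" adder (hypothetical): it adds column by
    # column but forgets to carry, the way a hurried person might.
    # When it gives a wrong answer, that is an error of conclusion,
    # not an error of functioning: the program runs exactly as written.
    result, place = 0, 1
    while a > 0 or b > 0:
        column = (a % 10) + (b % 10)     # sum of this column
        result += (column % 10) * place  # keep the digit, drop the carry
        a, b, place = a // 10, b // 10, place * 10
    return result

print(calculator_add(8, 9))   # 17
print(human_style_add(8, 9))  # 7 -- wrong, by design of the software
```

Both programs suffer no mechanical faults and no bugs; whether the machine gets addition right depends entirely on which software it's running.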
[Illustration: sam brown, explodingdog]

You might protest: But won't some low-level part of the computer still need to be adding and multiplying correctly, in order for the computer to run any program? Yes, but it's equally true that your low-level neurons need to add and process electrochemical signals properly, for you to be doing any thinking. That doesn't make you a perfect adder. You don't know what your neurons are doing. That neural activity might constitute your making an arithmetic mistake. Now, why can't the same be true for the computer?
People often say that if we ever succeed in constructing a real artificial intelligence, it will be much more "rational" and "logical" and "unemotional" than human beings are. I don't see why that's so. Why couldn't the AI be running software that makes it much less logical, and much more emotional, than human beings? What tends to happen is we think of the machine running smoothly and perfectly in the sense of not breaking down, suffering no errors of functioning. So we naturally assume that the machine won't make any errors of conclusion either. We naturally assume that it will always "do the rational thing." But that doesn't really follow. Whether the computer makes mistakes, and whether it "acts rational" or "acts emotional," will depend on the nature of its software....
A second objection that Turing considers has to do with the thought that a computer can only have a fixed pattern of behavior; it can only do what we program it to do.
In a sense this is true. What the computer does depends on what its program tells it to do. But that doesn't mean that the computer's behavior will always be fixed and rigid, in the way the ELIZA program's responses seem fixed and rigid. Computers can also be programmed to revise their own programs, to "learn from their mistakes." For instance, a chess-playing computer can be programmed to thoroughly analyze its opponents' strategy whenever it loses a game, and incorporate those winning strategies into its own database. In this way, it can learn how to use those strategies in its own games, and learn how to look out for them in its opponents' games. This kind of computer would get better and better every time it played chess. It would quickly become able to do things its programmers never anticipated (though of course they're the ones who programmed it to learn from its mistakes in this way). So it's not clear that computers have to be "fixed" and "rigid" in any sense that prevents them from being intelligent.
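A real chess learner is far beyond a lecture note, but the shape of the idea, keeping a record of what beat you and adjusting your own play accordingly, can be shown in miniature. The game and the class names below are invented for illustration: a rock-paper-scissors player that updates its own "database" of opponent moves and counters the most common one.

```python
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class LearningPlayer:
    # A toy analogue of the chess example: the program starts with a
    # fixed default move, but revises its own strategy database after
    # every round it observes.
    def __init__(self):
        self.opponent_history = Counter()

    def observe(self, opponent_move):
        # "Learning from its mistakes": record what the opponent did.
        self.opponent_history[opponent_move] += 1

    def choose(self):
        if not self.opponent_history:
            return "rock"  # default before any learning has happened
        most_common = self.opponent_history.most_common(1)[0][0]
        return BEATS[most_common]

player = LearningPlayer()
for _ in range(3):
    player.observe("rock")  # opponent keeps throwing rock
print(player.choose())      # paper -- a counter-move it wasn't given
```

The programmers never wrote "play paper"; they only wrote the rule for updating the database. What the program ends up doing depends on its history, not just on a fixed list of responses.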
In any case, isn't there a sense in which our behavior is "fixed" and "rigid," too? Isn't our behavior dictated by our genetic and neurophysiological make-up, and the ways our senses are stimulated? That doesn't show that we're not intelligent. Why should it show that a computer can't be intelligent? We'll talk about this in more detail in future classes.
Suppose some computer program were able to successfully pass Turing's Test, even when the test is administered by sophisticated, knowledgeable judges. There are three responses one could have to this.