Phil 340: AI and the Turing Test for Machine Intelligence

One interesting question is whether there are any fundamental differences between human thought and any thought that machines/computers/AIs will ever be capable of. The question we’re concerned with is somewhat different, though. Our question is: Will machines ever be able to genuinely think at all? Will they ever really be able to have genuine thoughts, intelligence, self-consciousness, a real mental life, even if that mental life might be different in some ways from our own?

For any disanalogy you think there is between humans and machines — whenever you’re tempted to say “Machines will never be able to X” — ask yourself:

  1. Is having feature X something that’s really necessary, if you’re to genuinely think and be intelligent at all? Or is it just an idiosyncrasy of the way we happen to think?
  2. Would it be in principle impossible for a machine to be programmed to have X? Why? Why would it be harder to program a machine to have X than to program it to do other things?
  3. Why do you think you have better reason to believe other people have feature X than you could ever have that a machine has it?

In the 1950 article “Computing Machinery and Intelligence,” the philosopher Alan Turing puts forward a test we can use to determine whether machines are genuinely thinking. The test works like this. A human judge carries on remote conversations with a machine and another human, and has to guess which is the machine and which the human. If the machine is often able to fool the human judge into thinking it’s human, then that machine passes the test, and Turing claims we should regard it as genuinely thinking, and having genuine intelligence. This has come to be known as the Turing Test for machine intelligence.

Note that Turing is only claiming that passing his test suffices for being intelligent. His test may be very hard; it may set the bar too high. Perhaps someday there really will be intelligent machines that nonetheless aren’t intelligent enough to pass Turing’s Test. Turing acknowledges this; he doesn’t want to say that being able to pass his test is a necessary condition for being intelligent. He’s only saying that the machines which are able to pass his test are intelligent.

Naive judges, Chatbots

Turing doesn’t say very much about who’s supposed to be judging these tests. But that’s important, because it’s very easy to fool computer neophytes into thinking that some program is really intelligent, even if the program is in fact totally stupid. One early computer program called ELIZA pretended to be a psychotherapist holding “conversations” with its patients. ELIZA was a very simple program. (You can nowadays get a version of ELIZA even for bargain-level smartphones.) Nobody who understands the program is at all tempted to call it intelligent. What the program does is this. It searches the user’s input for keywords like the word “father.” If it finds a keyword, it issues back some canned response, like “Do you often think about your father?” Here is more background about this program.
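
To give a feel for how unsophisticated this strategy is, here is a minimal sketch of a keyword-and-canned-response program in Python. This is just my own toy illustration, not ELIZA’s actual code; the keywords and replies are made up, and the real ELIZA was somewhat more elaborate (for instance, it also rearranged the user’s own words).

```python
# A toy keyword-matching "therapist" in the spirit of ELIZA (illustration only).

CANNED_RESPONSES = {
    "father": "Do you often think about your father?",
    "mother": "Tell me more about your mother.",
    "dream": "What do you think that dream means?",
}

FILLER_REPLIES = ["Please go on.", "How does that make you feel?"]

def reply(user_input, turn):
    text = user_input.lower()
    # Scan the input for a keyword we have a canned response to.
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in text:
            return response
    # No keyword found: fall back on a stock filler line.
    return FILLER_REPLIES[turn % len(FILLER_REPLIES)]

if __name__ == "__main__":
    for turn in range(5):
        print(reply(input("> "), turn))
```

Nothing in a program like this looks at what the sentences mean; it only pattern-matches on words it was given in advance.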

More links on chatbots:

As I said, no one who really understands ELIZA wants to claim that this program is intelligent. If we’re ever going to construct a real artificial intelligence, it will take a much more sophisticated approach than was used to make ELIZA.

Some conversational/text-generating algorithms that are more sophisticated than chatbots:

When Turing Tests are set up at public computing exhibitions, and the judges are just people taken off the street, people who aren’t very familiar with computer programs of this sort, then chatbot programs using the same underlying structure as ELIZA turn out to be able to fool those judges about half the time. (Here is an article about a chatbot that has successfully used such a strategy. Here is another more recent example.)

Hence, if you want to say that passing the Turing Test really is a good test for intelligence, then it’s going to make a difference who’s judging the Turing Test. We’d have to use better judges than just ordinary people off the street.

Objections Turing Discusses

Turing considers a number of objections to his test. Let’s talk briefly about two of the interesting objections.

One objection has to do with the fact that machines can’t make mistakes. Turing introduces an important distinction here, between errors of functioning and errors of conclusion. Examples of errors of functioning would be mechanical or electrical faults that prevent a machine from doing what it’s designed to do. We can also include “bugs” in the machine’s software as errors of functioning. These prevent the machine from working as it’s intended to work. Errors of conclusion, on the other hand, would be mistakes like saying “19” in response to the question “What is 8+9?” Now it is true that humans make many errors of this second sort; but Turing points out that there’s no reason why machines shouldn’t also make errors of this second sort. Whether a machine will make certain errors of conclusion really depends on the nature of its software. If we program the machine to add in the ways calculators do, and the machine executes its program perfectly, then it will always give the right answer to addition problems. But if we instead program the machine to do math in the ways that humans actually reason mathematically, then it might very well answer some addition problems incorrectly.
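
As a toy illustration of this point (my own sketch, not anything from Turing’s paper): both functions below run on hardware that executes them perfectly, so neither involves an error of functioning. But only the second ever gives wrong answers, because only the second is programmed to add the loose way a person might.

```python
import random

def calculator_add(x, y):
    # Adds the way a calculator does. If the machine executes this
    # correctly, the answer is always right: no errors of conclusion.
    return x + y

def humanlike_add(x, y):
    # Adds "the way a person might": it occasionally slips. The machine
    # can execute this program flawlessly (no error of functioning) and
    # still give a wrong answer (an error of conclusion).
    if random.random() < 0.1:
        return x + y + random.choice([1, 2])
    return x + y

print(calculator_add(8, 9))   # always 17
print(humanlike_add(8, 9))    # usually 17, occasionally 18 or 19
```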

You might protest: But won’t some low-level part of the machine still need to be adding and multiplying correctly, in order for the machine to run any program? Yes, but it’s equally true that your low-level neurons need to add and process electrochemical signals properly, for you to be doing any thinking. That doesn’t make you a perfect adder. You don’t know what your neurons are doing. That neural activity might constitute your making an arithmetic mistake. Why can’t the same be true for the machine?

People often say that if we ever succeed in constructing a real artificial intelligence, it will be much more “rational” and “logical” and “unemotional” than human beings are. I don’t see why that’s so. Why couldn’t the AI be running software that makes it much less logical, and much more emotional, than human beings?


[Illustration: sam brown, explodingdog]

What tends to happen is that we think of the machine as running smoothly and perfectly, in the sense of not breaking down and suffering no errors of functioning. So we naturally assume that the machine won’t make any errors of conclusion either. We naturally assume that it will always “do the rational thing.” But that doesn’t really follow. Whether the machine makes mistakes, and whether it “acts rational” or “acts emotional,” will depend on the nature of its software…

A second objection that Turing considers has to do with the thought that machines only have fixed patterns of behavior; they can only do what we program them to do.

In a sense this might be true. What the machine does depends on what its program tells it to do. But that doesn’t mean that the machine’s behavior will always be fixed and rigid, in the way the ELIZA program’s responses seem fixed and rigid.

Here are three lines of thought pushing back against that inference.

  1. Computers can also be programmed to revise their own programs, to “learn from their mistakes.” For instance, a chess-playing computer can be programmed to thoroughly analyze its opponents’ strategy whenever it loses a game, and incorporate those winning strategies into its own database. In this way, it can learn how to use those strategies in its own games, and learn how to look out for them in its opponents’ games. This kind of machine would get better and better every time it played chess. It would quickly become able to do things its programmers never anticipated (though of course they’re the ones who programmed it to learn from its mistakes in this way).

    Similarly, the “Digients” in the Ted Chiang story we read earlier have to go through a loving process of child-rearing and education. They don’t have their behavior explicitly scripted ahead of time. But then they eventually grow up and make decisions their parents didn’t expect, and aren’t entirely comfortable with.

    If it’s possible for machines to learn and evolve in these ways their programmers can’t predict, why should we think that machines have to be “fixed” and “rigid” in a sense that prevents them from being intelligent?

    We’ll talk more about machines having all their behavior explicitly scripted ahead of time at the end of these notes, when we look at a difficult passage where Turing talks about rules that prescribe and guide all your conduct.

  2. As I said in the first class, we can also think about “machines” that never had any programmer, not even one who set up their initial framework. I told a story about a cloud of machine parts floating in space that, just through random happenstance, come together in a way that turns out to work. What if such a machine turned out to be internally the same as some AI in a Russian Robotics Lab? Would we want to say that the machine that was explicitly programmed in the lab can’t have its own mental life or make its own choices, but the machine that randomly fell together can?

  3. One response to the machines-with-no-programmer idea is that regardless of whether there was a person who wrote the program, still this machine would be running a program. It’d still be predetermined how it will respond to every situation it encounters (that doesn’t break it). Even if no programmer or anyone else knows in advance what that response will be, still it’s already settled.

    Well, so what? Should that show that a machine can’t have its own opinions, ambitions, feelings? Should it show that it can’t make its own choices? Maybe all the same will turn out to be true for our responses, too. Maybe our behavior is dictated by our genetic and neurophysiological makeup, and the experiences we’ve had in our lives. That doesn’t show that we don’t have our own mental lives. So why should it show it for machines?

    Let’s talk about this more.

    Philosophers have a notion of the world being deterministic. This can be explained in several ways. Here’s the way of explaining it that I find most useful.

    Certain laws of nature are such that, for a given “starting point” for the universe, those laws are compatible with only one subsequent future. Any world with that past but a different future would violate those laws. It couldn’t be a world that those laws described or governed. We call laws of this sort deterministic laws. Given a starting point, these sorts of laws require the future to proceed in a single fixed way. Any two possible worlds, if they started off the same way, and both had the same deterministic laws, must continue in the same way.

    Other laws of nature are compatible with more than one subsequent future. We call these indeterministic laws. These laws may say: given this starting point, the world can evolve in any of ways A, B, or C. None of these possible futures is guaranteed to happen. (There’s a small toy sketch illustrating this contrast just before the next section, below.)

    We will understand Determinism to be the thesis that all of the laws of nature that govern our world are deterministic laws.

    Is Determinism true? This has been disputed for a long time. Some of the ancient Greeks, especially the Stoics, thought that it was. The “clockwork” picture of the universe we get from Newton is also a Determinist one. In more recent physics, things are less clear. Our best theories of quantum physics are not obviously deterministic. They don’t say “in situations like this, so-and-so is guaranteed to happen.” Instead they involve probabilities in ways we can’t eliminate. However, it is philosophically controversial how the probabilities in those theories should be interpreted. Hence, one cannot easily say whether or not our best theories of quantum physics posit that our world has indeterministic laws.

    If you want to read more on debates about how to interpret the probabilities in quantum physics, this book is a good introduction to the issues.

    Even if you think Determinism is false about our world, still, you will learn a lot about our concept of free will by investigating what would follow if Determinism were true.

    Some philosophers hold the view that Determinism and free will are incompatible with each other. They think that if you’ve got one, you can’t have the other. We call these philosophers Incompatibilists. Other philosophers think that Determinism and free will are compatible. They think that it’s possible to have both. We call these philosophers Compatibilists. We can further sub-divide the Incompatibilists and the Compatibilists as follows:

    Incompatibilists (you can’t have both free will and Determinism):

      - We have free will, so Determinism is false. These are the Libertarians.

      - We lack free will: perhaps because Determinism is true (these are the Hard Determinists), or perhaps we lack it even though Determinism is false.

    Compatibilists (you can have both free will and Determinism):

      - Soft Determinists: Determinism is true and we have free will.

    In a large poll of philosophy professors in 2009, 17% of the respondents favored Libertarianism, 15% favored us having no free will, and 56% favored Compatibilism. (The remaining 12% were undecided, thought the question was too unclear, or that there’s no fact of the matter, and so on.) Philosophy isn’t a popularity contest, so these numbers don’t tell us which view is right. But they do show how much controversy there is, and that each of these positions has some serious support.

    Wrapping up: it’s not entirely clear whether our universe is a Determinist one. Maybe all our behavior is predetermined too. There’s no obvious reason in principle why machines’ behavior would have to be any more predetermined than ours is.

    So why should the fact that machines are “running a program” make us doubt these things about them?

    To read more about free will, see the optional reading links I posted in the course announcements.
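
To make the contrast between deterministic and indeterministic laws a bit more concrete, here is a toy simulation of my own. It isn’t meant as a serious model of physics, just an illustration: under a deterministic update rule, two “worlds” that start alike must stay alike; under an indeterministic rule, they can come apart.

```python
import random

def deterministic_step(state):
    # Exactly one possible successor: the same state always
    # leads to the same next state.
    return (3 * state + 1) % 17

def indeterministic_step(state, rng):
    # More than one possible successor: chance settles which one occurs.
    return (state + rng.choice([1, 2])) % 17

def history(step, start, steps=10):
    states = [start]
    for _ in range(steps):
        states.append(step(states[-1]))
    return states

# Same starting point + deterministic laws: the futures must coincide.
print(history(deterministic_step, 5) == history(deterministic_step, 5))  # True

# Same starting point + indeterministic laws: the futures may diverge.
rng_a, rng_b = random.Random(1), random.Random(2)
print(history(lambda s: indeterministic_step(s, rng_a), 5))
print(history(lambda s: indeterministic_step(s, rng_b), 5))
```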

How to Interpret The Turing Test

Suppose some computer program were able to successfully pass Turing’s Test, even when the test is administered by sophisticated, trained and knowledgeable judges. There are three responses one could have to this.

  1. Response 1 insists that machines can never have real thoughts or mental states of their own. They can merely simulate thought and intelligence. So all that passing the Turing Test proves is that the machine is a good simulation of a thinking thing.

  2. Response 2, on the other hand, thinks passing the Turing Test does give us good reason to think that a machine really has thoughts and other mental states. But passing the Turing Test would not guarantee that the machine has those mental states; it would only be good evidence that it has them. After all, passing the Turing Test is just a matter of behaving in certain ways, and it is a real step to go from something’s behaving intelligently to its really being intelligent. With regard to really being intelligent, more things may matter than just whether the machine passes certain behavioral tests. For example, perhaps some machine acts very intelligent, but then we learn it’s just running a search-and-canned-response program like ELIZA, with a big database of canned responses. We may retract our belief that the machine is genuinely intelligent, if we learn that that is what’s going on. When a machine passes Turing’s Test, though, it does give us some evidence for taking the machine to be intelligent, and not merely simulating intelligence.

  3. Response 3 goes even further. It says that passing Turing’s Test suffices for being intelligent. According to this response, being able to respond to questions in the sophisticated ways demanded by the Turing Test is all there is to being intelligent. This is an example of what philosophers call a behaviorist view about the mind. A behaviorist about intelligence says that all there is to being intelligent is behaving, or being disposed to behave, in certain ways in response to specific kinds of stimulation. If a tree behaves in those ways, the tree also counts as intelligent. When a human body no longer behaves in those ways (e.g., when it becomes a corpse), then we say there is no longer any mind “in” that body. We’ll talk about this kind of view more in later classes.

A difficult passage

When re-reading Turing’s article, I noticed this passage:

It is not possible to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances… To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible. With all this I agree.

From this it is argued that we cannot be machines. I shall try to reproduce the argument, but I fear I shall hardly do it justice. It seems to run something like this, “If each man had a definite set of rules of conduct by which he regulated his life he would be no better than a machine. But there are no such rules, so men cannot be machines.” The undistributed middle is glaring.

What the heck does that last sentence mean? I can’t expect you to know. I hope when you come across passages like this you will at least be able to work out from context what the author must in general be getting at. I hope it was clear that Turing doesn’t approve of the argument he’s reporting here, and that the passages that come next in his article—where he distinguishes between “rules of conduct” and “laws of behavior”—are meant to be part of a reply to the argument. Some of you may have been industrious enough to google the term “undistributed middle” to try to figure out more specifically what Turing was saying. (If so, great. That disposition will serve you well.)

What you will find is that this is a term from an older logical system. We don’t use the expression so much anymore—in fact I myself had to look up specifically which fallacy this is. An example of the fallacy of undistributed middle would be the argument “All newts are gross. Harry is gross. So Harry is a newt.” I hope that even without the benefit of any formal training in logic, you’ll be able to see that this is not a good form of argument. (Of course there can be instances of this form whose premises and conclusion are all true, but that doesn’t make this a good form of argument.)

Now I have to scratch my head and speculate a bit to figure out why Turing thought the argument he was discussing displayed this form. I don’t think it’s fair for him to say that the presence of this fallacy in the argument he reports is “glaring.” Here’s my best guess at what Turing is thinking.

We begin with the claim:

  1. If you had a definite set of rules of conduct by which you regulated your life — rules that prescribed and guided all your choices — you would be a machine.

As we’ll discuss more later, claims of the form “If D, then M” are always equivalent to “contrapositive” claims of the form “If not-M, then not-D.” (Compare: if Fido is a dog, then Fido is mortal. Equivalent to: if Fido is immortal, then Fido is not a dog.) So 1 is equivalent to:

  2. If you are not a machine (or as Turing puts it, if you are “better than” a machine), then you don’t have a definite set of rules that prescribe and guide all your conduct.

Now Turing is imagining that his opponents continue the argument like this:

  3. I don’t have a definite set of rules that prescribe and guide all my conduct.

  4. Therefore, I am not (or: I am “better than”) a machine.

The argument from 2 and 3 to 4 does display the fallacy of undistributed middle that we described above. Turing’s text doesn’t make this as explicit as it might have, though, since he writes the beginning premise in form 1 rather than the (equivalent) form 2, and he doesn’t explicitly include premise 3 in his reconstruction of the argument, but leaves it implicit.
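
Putting my reconstruction into schematic form may make the shape of the argument easier to see. Writing M for “is a machine” and D for “has a definite set of rules of conduct that prescribe and guide all one’s behavior” (my notation, not Turing’s), the argument runs:

```latex
\begin{align*}
\text{Premise 2:}    \quad & \lnot M \rightarrow \lnot D
    && \text{(whatever is not a machine lacks such rules)}\\
\text{Premise 3:}    \quad & \lnot D
    && \text{(I lack such rules)}\\
\text{Conclusion 4:} \quad & \lnot M
    && \text{(so I am not a machine)}
\end{align*}
```

Read as a syllogism (everything that is not a machine lacks such rules; I lack such rules; so I am not a machine), this has the same shape as the newt argument above, which is presumably why Turing files it under the undistributed middle.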

Turing is imagining that even if some machines have definite rules that explicitly script their conduct in every situation they encounter, others may not. The point of the passages that come next in his article is to distinguish between the idea of having such “rules of conduct” and there being low-level “laws of behavior” that settle in advance how the machine (or the human being) will respond to any given stimulus. Turing would agree that there are low-level laws of behavior strictly governing what the machine does, but there may be such laws for us too. He’d agree that we don’t have rules of conduct telling us what to do in every situation, but he’d say machines won’t necessarily have such rules either. Machines and we might both have to figure out what to do, rather than follow some recipe already written out in advance.

I think I understand the distinction he’s trying to make, but I’m not entirely sure that I do. How about you? Can you make sense of the idea that there may be some low-level laws of behavior (say your genes, and everything that’s happened to you up until this point in your life) that govern how you will act, even though you don’t have rules you consult that explicitly guide every choice you make? What more would you say to better explain this distinction? Can you make sense of the idea that some machine might also lack such rules of conduct?

There’s a lot here for us to wrestle with later. Hopefully, though, this will help you better track how the words Turing actually wrote here are supposed to fit into his larger argument.

Turing is a very interesting character who made huge contributions to several areas of thought, beyond what we’re looking at in class. If you read about his life, you’ll see he was also treated terribly for being homosexual, and may have committed suicide as a result. Or his death may have been a tragic accident; from what I’ve read it seems to be unclear. In any event, much of our contemporary life has been profoundly shaped by his contributions.

Among Turing’s best-known contributions were: the notion of an (abstract) Turing Machine (we’ll talk about this later in the class) and other contributions to the foundations of logic and theory of computation; breaking the German ENIGMA code during World War II; and the development of early computers. If (like me) you find the last topic interesting and want to read more, here are some links: