Phil 340: Summary of Hofstadter’s Coffeehouse conversation

This is a summary of the full dialogue, which you should certainly read first; it is available as Ch. 5 of The Mind’s I.

[A dialogue between Chris the physicist, Pat the biologist, and Sandy the philosopher]

Early in the conversation, C talks about things computers will never be able to do, like writing a novel.

S introduces Turing. He/she mentions the question “Will machines ever think?” and says Turing thought this question was too emotionally provocative and perhaps even “meaningless” as it stands; we’d do better to replace it with a more specific question, like “Will machines ever pass the Turing Test?”

P points out that if a man passes Turing’s game of imitating a woman, that doesn’t prove he’s a woman. So why should we think if a machine passes the Turing Test, that shows it can think?

S: We couldn’t conclude that the man was a woman, but perhaps we could conclude that he has insights into “feminine mentality.” So why can’t we conclude that a machine that passes the Turing Test has insights into human mentality? Having insights is one form of thinking.

P: Why are we testing the machine’s ability to fool people by typing at them? Why not its ability to dance?

S: Discoursing intelligently about arbitrary subjects is a better guide to whether someone can think.

C+P: Passing the Turing Test just shows that something can simulate thought in a computer; when we simulate hurricanes in a computer, nobody really gets wet.

S: (i) you’re only simulating certain aspects of the hurricane, not the whole thing; (ii) if we simulated little people too, wouldn’t they get wet inside the simulation?

C: Sounds like you’re willing to call anything a hurricane, as long as its effects have the same structure as “floods and devastation.”

S: If you’re communicating over ham radio via Morse code, you might think about “the person at the other end,” but to “see” this person, your mind has to do some decoding and interpreting. The same goes for the simulated hurricane.

P: But there really is a real person behind the Morse code; there isn’t a real hurricane behind the computer bits.

S agrees there isn’t really any hurricane when we simulate one on a computer, but thinks the example is useful preparation for the question about simulated thought. Some lessons he/she wants us to draw from thinking about hurricanes:

  1. What do real hurricanes have in common? An abstract pattern or organization. Also, there’s no sharp distinction between hurricanes and other kinds of storms; the characters discuss the weather on Jupiter and on the surface of a neutron star.
  2. As with numbers, so with hurricanes: we can extend concepts from familiar cases to less familiar cases that share the same essence.

S: Thought, even more than hurricanes, is an abstract structure. Earlier he/she had also observed that some processes, like adding, are such that simulating them counts as doing them.

If physically different brains can support the same thinking, it’s the pattern not the medium that’s important. So why can’t there be a very different medium (like a computer) with the same pattern, and it also thereby be thinking?

P+C: How can you tell a machine has the same patterns, just because it passes the Turing Test? You only see the outside.

S: But I only see your outside, too. Other minds are like targets in the physics lab that we can’t see directly, but only by how they interact with our measurement devices.

P: With other people, we can also watch their faces and expressions.

S: Why should that matter for whether it’s reasonable to attribute thinking to them?

P: Another reason it’s reasonable to attribute thinking to other people is we have the same biological origin.

S: That’s chauvinistic. What’s important to thinking is similarity of internal organization/structure/software, not our biological or chemical makeup. The Turing Test looks beyond external form.

P: I’m not saying it’s the same outward shape, but our both coming from the same kind of DNA, and having the same biological history.

S: That may be indirect evidence we’re both thinking, but isn’t the Turing Test more direct?

C: But just as a man might fool a judge in Turing’s imitation game, maybe a machine could fool the judge in the Turing Test.

S: Perhaps, if the judge were too hasty or careless.

The characters then discuss what a careful judge ought to probe for.

At this point in the discussion, C+P have become somewhat more sympathetic to the idea that machines that pass the Turing Test might be thinking.

C: Couldn’t there be a thinking creature without emotions, or at least without outward emotional responses?

S: Maybe such a thing could play a good game of chess, but I wouldn’t call it conscious.

C: Don’t chess programs look ahead to figure out their next moves, and so need some representation of their own states and choices? Isn’t that some kind of self-awareness?

S: A chess program has no concept of why it’s playing chess, of the fact that it is a program, that it has an opponent. It has no idea what winning and losing are.
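C’s point about look-ahead, and S’s reply, can be made concrete with a minimal negamax search sketch. The game below is a toy invented here for illustration (not chess, and not any real engine): notice that the “opponent” exists only as a sign flip in the recursion, which is just how thin the program’s “self-awareness” really is.

```python
# Minimal look-ahead search (negamax), the technique behind a chess
# program's "planning." The game is a toy invented for illustration:
# a running total to which each player adds 1 or 2.

def negamax(state, depth, moves, apply_move, score):
    """Best score reachable from `state` for the side to move.

    The "opponent" appears only as the minus sign below: the program
    represents positions and choices, but has no concept of an
    adversary, of winning, or of why it is playing.
    """
    options = moves(state)
    if depth == 0 or not options:
        return score(state)
    return max(-negamax(apply_move(state, m), depth - 1,
                        moves, apply_move, score)
               for m in options)

# Toy game: each move adds 1 or 2 to a total; leaves are scored by
# (total mod 3) from the mover's perspective -- arbitrary, just to
# give the search something to optimize.
best = negamax(0, 4,
               moves=lambda s: [1, 2],
               apply_move=lambda s, m: s + m,
               score=lambda s: s % 3)
print(best)
```

The program does “represent its own states and choices,” as C says; S’s point is that nothing in this representation amounts to understanding what the game is for.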

P: How do you know what it feels or knows?

S: I can’t prove some things (for example, that thrown stones don’t know anything), but they’re still reasonable.

S introduces the ideas of solipsism (only I am conscious) and the extreme opposite view panpsychism (everything is conscious). P expresses some sympathy for panpsychism.

S tells us more about how chess programs work, and insists they are too simple to be thinking.

S+P discuss how we “project” talk about desires and trying onto chess-playing computers, ants, cats, and dogs. In the latter cases, P thinks the creatures really do have desires and feelings, just not ones as complex and deep as humans’.

C repeats: Why can’t there be an intelligence without feelings?

S: (1) any intelligence needs motivations. It has to filter or prioritize its inputs, and choose which points to be interested in. These priorities and choices come from its emotional biases and so on.

(2) feelingless calculations do exist, in dumb calculators. It’s only when you put a bunch of those together, in a big and complex enough system, that you get a system with desires and beliefs.

The characters discuss the notion of taking the intentional stance towards something.

C: Should we take the intentional stance towards animals? They couldn’t pass the Turing Test. So do we have other ways to test for the presence of thought?

S: OK, there could be lower levels of consciousness that can’t yet pass the Turing Test.

P: People aren’t just machines with gears. They have a creative flame.

S: So too do some screensavers. Why think of the computer as just a lifeless steam-shovel? It can have dancing sparkly patterns.

S: There are different things you could mean by asking “Can computers think?” Can present-day computers think with existing programs? No, and probably they couldn’t think even with better programs. Over time, the public will get more familiar with computers. Different kinds of computers will evolve: for business, for school, for industry, for studying intelligence. The last ones might get sensory systems and locomotion, but no reason to think they’ll look like C-3PO.

P: What’s inside cells is wet and slippery. What’s inside machines is dry and rigid. Computers don’t make mistakes, and they only do what you tell them.

S: Computers can make wrong weather predictions.

P: Only because you fed it the wrong data.

S: Not necessarily. Weather prediction is so complex that it requires extrapolation and guesswork, so even if the computer gets the right data, it can make a wrong prediction. As computers evolve, they’ll model more messy phenomena where the correct answers aren’t mathematically derivable from the inputs. We can call this becoming “wetter and slipperier.” On the other hand, DNA and enzymes act in a dry and rigid way, don’t they?

P: Yes but when they work together, it’s so complex that lots of unexpected things happen. All the complex mechanicalness adds up to something very fluid.

S: Just like machines.

C: If we’re basically a kind of machine ourselves, why can’t we recognize it?

S: Emotions interfere. Also we have a hard time conceptually jumping between “levels.”

P: Military missiles might someday decide to be pacifist and refuse to blow up.

S: There are varying degrees of passing the Turing Test; it’s not black-and-white. It also depends on how sophisticated the judge is.

S discusses the idea of computer programs that work like a jukebox, just selecting canned responses. He/she points out that such a program would require astronomical memory to pass the Turing Test, and that enemies of AI imagine this kind of program when arguing that no machine passing the Turing Test would really be thinking.
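The “astronomical memory” claim can be checked with a back-of-the-envelope count. The vocabulary size and conversation length below are illustrative assumptions (they are not figures from the dialogue):

```python
# Rough size of a "jukebox" (lookup-table) program that stores one canned
# reply for every possible conversation history. All numbers here are
# illustrative assumptions, not figures from the dialogue.

VOCAB = 1_000        # assumed working vocabulary of the judge
WORDS_PER_TURN = 20  # assumed (maximum) words per judge utterance
TURNS = 5            # assumed number of judge turns to cover

single_utterances = VOCAB ** WORDS_PER_TURN   # possible one-turn inputs
histories = single_utterances ** TURNS        # possible full histories

# These are exact powers of ten, so the exponent is the digit count minus one.
print(f"utterances per turn: 10^{len(str(single_utterances)) - 1}")
print(f"replies to store:    10^{len(str(histories)) - 1}")
```

Even under these modest assumptions the table needs about 10^300 entries; for comparison, the observable universe contains only around 10^80 atoms, so no physical memory could hold it.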