There are two kinds of considerations that motivated the development of functionalism. One of these considerations is the possibility that a given mental state, like pain, might turn out to be correlated with different brain processes in different creatures. One response to that is to introduce different notions, like "pain-in-humans" and "pain-in-squid" and so on--and to treat these as identical to different brain processes--but to reject talk of "pain-in-general." The problem with that response is that we have the intuition that pain is a single kind of mental state that we might share with squid and other creatures. A different response is to say that there is a property of "pain-in-general"; it's just that this property is realized in different ways in different creatures--in something like the way that a disposition like fragility can have different categorical bases in different objects. As you'll see, this kind of move plays a large role in functionalist theories.

The second consideration motivating functionalism comes from a problem that faced the behaviorist. When we discussed behaviorism, we saw that typically there won't be any one-to-one correlations between mental states and behavior. How one behaves depends on all the mental states one has. There's no such thing as the distinctive behavior associated with a single mental state in isolation.

This raised a threat of regress or circularity: it seemed impossible for the behaviorist to define any one mental state until he had already defined lots of other mental states.

Unlike the behaviorist, the functionalist believes that mental states are real internal states that cause our behavior. But the functionalist wants to retain the behaviorist's idea that there are close conceptual connections between our mental states and the kinds of behavior we usually express them with. So the functionalist also faces this threat of regress or circularity. How can he describe the connections between any one mental state and behavior, until he has already defined lots of other mental states?

There are two ways to avoid this threat of regress or circularity. The first is in terms of Turing Machines and machine tables; the second is in terms of the Ramsey/Lewis method for defining terms.

Functionalism in Terms of Turing Machines and Machine Tables

The philosopher Alan Turing (the same Alan Turing who proposed the Turing Test) was trying to better understand the intuitive notion of a problem's being solvable in a purely mechanical or automatic way. (E.g., by a clerk who follows a strict set of rules.) He made this vague intuitive notion mathematically precise, by introducing the idea of a very simple but also very powerful computer. This computer has an infinitely long tape that can have symbols written on it. There's a little cart that can go back and forth on the tape, reading what symbols are written at each position, and sometimes erasing them and replacing them with different symbols. The computer is allowed to take arbitrarily long to calculate its final answer, so long as it finishes in some finite amount of time. Nowadays, philosophers call these computers Turing Machines. We can prove that any problem solvable by any digital computer we know how to make can also be solved by a Turing Machine. This makes Turing Machines very useful for philosophical and logical discussions of computers.
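
To make the idea concrete, here is a rough sketch in Python of how one might simulate a Turing Machine. The function name, the rule format, and the sample machine are our own illustrative choices, not Turing's notation; a real treatment would be more careful about blank symbols and halting states.

    # A minimal Turing Machine simulator (an illustrative sketch).
    # The tape is a dict from position to symbol, so it is unbounded
    # in both directions, approximating the infinitely long tape.

    def run_turing_machine(rules, tape, state="start", pos=0, max_steps=10000):
        """rules maps (state, symbol) to (new state, symbol to write, move),
        where move is -1 (one square left) or +1 (one square right).
        The machine halts when no rule applies to its current situation."""
        for _ in range(max_steps):
            symbol = tape.get(pos, "_")          # "_" marks a blank square
            if (state, symbol) not in rules:     # no applicable rule: halt
                return tape
            state, write, move = rules[(state, symbol)]
            tape[pos] = write                    # the cart erases and rewrites...
            pos += move                          # ...then moves along the tape
        raise RuntimeError("gave up: no answer within max_steps")

    # Example: a machine that flips every 0 to 1 (and vice versa) until
    # it reaches a blank square, then halts.
    flip = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
    }
    tape = {i: s for i, s in enumerate("1011")}
    print(run_turing_machine(flip, tape))   # positions 0..3 now hold 0, 1, 0, 0

Everything distinctive of a particular Turing Machine lives in its table of rules; the simulator loop is the same for all of them.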

A Turing Machine isn't a real physical computer. It's an abstract mathematical model. We can't really build Turing Machines (since their tapes would have to be infinitely long). And even if we could build them, they wouldn't be very efficient. They can in principle execute any computer program we know how to write, but it would take them much longer to do so than the Macs and PCs we're more accustomed to.

Let's introduce a different but related way of modelling a computer. Suppose we're designing a piece of software to run a rudimentary Coke machine. Coke costs 15¢, and the machine only accepts nickels and dimes.

The Coke machine has three states: ZERO, FIVE, TEN. If a nickel is inserted and the machine is in state ZERO, it goes into state FIVE and waits for more input. If a nickel is inserted and the machine is already in state FIVE, it goes into state TEN and waits for more input. If a nickel is inserted and the machine is already in state TEN, then the machine dispenses a Coke and goes into state ZERO. If a dime is inserted and the machine is in state ZERO, it goes into state TEN and waits for more input. If a dime is inserted and the machine is already in state FIVE, then it dispenses a Coke and goes into state ZERO. If a dime is inserted and the machine is already in state TEN, then it dispenses a Coke and goes into state FIVE. (Alternatively, if we wanted the machine to give change, it could dispense a Coke and a nickel and go into state ZERO.)
Notice how the system's response to input depends on what internal state the system is already in, when it received the input. When you put a nickel in, what internal state the Coke machine goes into depends on what its internal state was when you inserted the nickel.

Our Coke machine's behavior can be specified as a table, which tells us, for each combination of internal state and input, what new internal state the Coke machine should go into, and what output it should produce:

  Input    Present State    Go into this State    Produce this Output
  ------   -------------    ------------------    -------------------
  Nickel   ZERO             FIVE                  -
  Nickel   FIVE             TEN                   -
  Nickel   TEN              ZERO                  Coke
  Dime     ZERO             TEN                   -
  Dime     FIVE             ZERO                  Coke
  Dime     TEN              FIVE                  Coke
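
A machine table of this kind translates almost directly into code. Here is a sketch in Python (the names TABLE and insert_coin are our own): the table becomes a dictionary mapping each (present state, input) pair to a (new state, output) pair, and running the machine is just repeated lookup.

    # The machine table above, written as a Python dictionary. Each entry
    # maps (present state, input) to (new state, output); None means the
    # machine produces no output on that step.
    TABLE = {
        ("ZERO", "nickel"): ("FIVE", None),
        ("FIVE", "nickel"): ("TEN",  None),
        ("TEN",  "nickel"): ("ZERO", "Coke"),
        ("ZERO", "dime"):   ("TEN",  None),
        ("FIVE", "dime"):   ("ZERO", "Coke"),
        ("TEN",  "dime"):   ("FIVE", "Coke"),
    }

    def insert_coin(state, coin):
        """Respond to input: look up the present state and the input,
        and return the new state together with any output produced."""
        return TABLE[(state, coin)]

    state = "ZERO"
    for coin in ["nickel", "dime"]:               # 5 cents, then 10 cents
        state, output = insert_coin(state, coin)
        if output is not None:
            print("dispensing:", output)          # prints: dispensing: Coke
    print("machine is back in state", state)      # prints: ... state ZERO

Notice that nothing in this sketch mentions gears, transistors, or coin slots; the dictionary records only which states, inputs, and outputs are connected to which.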

In abstract terms, any computer program can be understood like this. Any system implementing the program has a number of different internal states. Implementing the program entails linking these internal states up in such a way that:

  • the system implementing the program will respond to input by changing its internal state in certain specified ways (this may depend on what internal state the system was already in, when it received the input)
  • when the system is in certain internal states, that will cause it to go into other internal states
  • when the system is in certain internal states, that will cause it to produce certain fixed outputs
We call tables of the sort spelled out above machine tables. You can think of them as a kind of very simple, but mathematically very powerful programming language. As it turns out, any computer program we know how to write, no matter how sophisticated, can be translated into this form. In particular, we can prove that there are systematic correlations between the programs that Turing Machines execute and these machine tables.

Notice that when we described our Coke machine, we didn't say what it was built out of. We just talked about how it works, that is, how its internal states interact with each other and with input to produce output. Anything which works in the way we specified will suffice. What sort of stuff it's made out of doesn't matter.

In addition, we said nothing about how the internal states of the Coke machine were constructed. Some Coke machines might be in the ZERO state in virtue of having a gear in a certain position. Other Coke machines might be in the ZERO state in virtue of having a current passing through a certain transistor. We understand what it is for the Coke machine to be in these states in terms of how the states interact with each other and with input to produce output. Any system of states which interact in the way we described counts as an implementation of our ZERO, FIVE, and TEN states.

So our machine tables show us how to understand a system's internal states in such a way that they can be implemented or realized via different physical mechanisms. This point is put by saying that the internal states described in our machine tables are multiply realizable.
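
We can even mimic this point in code. The two classes below (our own toy illustration, reusing the TABLE dictionary from the earlier sketch) "build" the same Coke machine out of different materials: one stores its state as a named gear position, the other as credit counted in cents. Because both interact with input in exactly the way the machine table specifies, both count as implementations of ZERO, FIVE, and TEN.

    # Two different "hardware" realizations of the same machine table,
    # reusing the TABLE dictionary defined in the sketch above.

    class GearCokeMachine:
        """Realizes the states as positions of an (imaginary) gear."""
        def __init__(self):
            self.gear_position = "ZERO"
        def insert(self, coin):
            self.gear_position, output = TABLE[(self.gear_position, coin)]
            return output

    class CircuitCokeMachine:
        """Realizes the very same states as credit held in cents."""
        STATE_FOR_CENTS = {0: "ZERO", 5: "FIVE", 10: "TEN"}
        CENTS_FOR_STATE = {"ZERO": 0, "FIVE": 5, "TEN": 10}
        def __init__(self):
            self.cents = 0
        def insert(self, coin):
            state = self.STATE_FOR_CENTS[self.cents]
            new_state, output = TABLE[(state, coin)]
            self.cents = self.CENTS_FOR_STATE[new_state]
            return output

    # Both machines respond identically to any sequence of coins:
    for machine in (GearCokeMachine(), CircuitCokeMachine()):
        outputs = [machine.insert(c) for c in ["dime", "nickel", "dime"]]
        print(outputs)    # [None, 'Coke', None] from both machines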

Computers are just very sophisticated versions of our Coke machine. They have a large number of internal states. Programming the computer involves linking these internal states up to each other and to the outside of the machine so that, when you put some input into the machine, the internal states change in predictable ways, and sometimes those changes cause the computer to produce some output (dispense a Coke).

From an engineer's point of view, it can make a big difference how a particular program gets physically implemented. But from the programmer's point of view, the underlying hardware isn't important. Only the software matters. That is, it only matters that there be some hardware with some internal states that causally interact with each other, and with input and output, in the way the software specifies. Any hardware that does this will count as an implementation or realization of that software.

The functionalist about the mind thinks that all there is to being intelligent, really having thoughts and other mental states, is implementing some very complicated program. In a slogan, our brains are the hardware and our minds are the software. In us, this software is implemented by a human brain, but it could also be implemented on other hardware, like a Martian brain or a digital computer. And if it were, then the Martian brain and the computer would have real thoughts and mental states, too. So long as there's some hardware with some internal states that stand in the right causal relations to each other and to input and output, you've got a mind.

Note: The functionalist does not claim that running just any kind of computer program suffices for mentality. It has to be the right kind of program. A computer running an ELIZA program is not intelligent, for instance. A computer running the same "program" that the neurons in our brain run would be intelligent.

A line of reasoning often offered in support of this view is the following:

  • At some level of description, the brain is a device that receives complex inputs from the sensory organs and sends complex outputs to the motor system. The brain's activity is well-behaved enough to be specifiable in terms of various (incredibly complicated) causal relationships, just like our Coke machine. We don't yet know what those causal relationships are. But we know enough to be confident that they exist. So long as they exist, we know that they could be spelled out in some (incredibly complicated) machine table.

  • This machine table describes a computer program which as a matter of fact is implemented by your brain. But it could also be implemented by other sorts of hardware. Why should the hardware make any difference to whether any thinking is going on? Wouldn't any other hardware, which did exactly the same job as your brain, in terms of the causal relations between its internal states, sensory inputs, and outputs to the motor system, yield as much mentality as your brain does? Why should the actual physical make-up of your brain be important?

  • Human neurons are very simple devices. If we replaced just one of your neurons with a tiny computer chip that performed the same job as the neuron it was replacing, doesn't it seem plausible that you would still have a mind, and continue to be capable of the same mental processes as before? You would keep walking and talking, just like before. Your brain would continue to process information just like before. None of your other neurons would even notice the difference! So how could the replacement make any difference to your mental life?

  • And if we could replace one neuron with a tiny computer chip, why couldn't we gradually keep replacing your neurons, one by one, until they were all replaced? Now there are no human neurons left; the jobs they were doing are now performed by tiny silicon chips. Over this process of gradual replacement, there doesn't seem to be any point at which you lose your ability to think. You continue walking and talking, just like before. Your "brain" continues to process information in the same way. It's plausible that even you wouldn't be able to tell that any change had taken place.
For these reasons, the functionalists think that the neurophysiological details of how your mental software is implemented are not important to whether you have real mentality. They think that any hardware which implements that software will also have a mental life, indeed the same mental life you have. Mental states, like the states of our Coke machine, can be implemented on many different sorts of hardware.

Problems with Turing Machines as Models of the Mind

These notions of Turing Machines and machine tables were historically very important in the development of functionalism. This was for two reasons.

  1. they showed us how to simultaneously define a system of internal states, which interact with each other and with input and output. This answers the circularity worries that plagued the behaviorist.

  2. they showed us how to understand the internal states so defined in such a way that they can be implemented or realized via different physical mechanisms. Turing Machines are computationally equivalent to machine tables, and a machine table's states can be realized in a variety of ways.

Originally functionalists said that our minds were Turing Machines, and that mental states like belief and desire were just different states of this Turing Machine, in the same way that ZERO and FIVE and TEN are different states of our Coke machine.

But remember that Turing Machines and machine tables are just two (mathematically elegant) ways of formally specifying a piece of software. They have some distinctive features that other ways of specifying a piece of software do not. (This is akin to the differences between different programming languages.) So even if the analogy between minds and software is correct at a general level, the attempt to spell this analogy out in terms of Turing Machines and machine tables might face special problems. And indeed it does.

  1. A machine table is defined in such a way that a system implementing it can be in only one of its states at a time. But typically we think of mental states as being states that a mind can be in several of, at the same time. For instance, one of my mental states is the belief that Harvard is in MA. Another mental state is the desire to finish writing up this web page. I am right now in both of these mental states. So we cannot identify mental states of this sort with states of a machine table.

  2. We cannot make any sense of different machine tables having states in common. If your Coke machine does not implement exactly the same machine table as my Coke machine, then it is not possible for our Coke machines ever to be in the same machine state. We cannot say that both of our Coke machines are in state ZERO, for instance. State ZERO is defined in terms of the entire machine table it's part of. If our Coke machines implement different machine tables, then all of their machine states must therefore be different. Now, it is likely that the software my brain is running is somewhat different from the software your brain is running. When exposed to the same environmental stimulation, even from birth, people come to be in different mental states. So if we identified our mental states with states of a machine table, it would then be impossible for you and me to have any of the same mental states. This is an absurd result. No doubt you and I differ in some of our mental states. But we also have many mental states in common.
Because of these difficulties, functionalists have moved away from Turing Machines and machine tables as models of the mind. Nowadays they spell out the analogy between minds and software in a slightly different way. This is what we will look at next.

 

