There are two kinds of considerations that motivated the development of functionalism. One of these considerations is the possibility that a given mental state, like pain, might turn out to be correlated with different brain processes in different creatures. One response to that is to introduce different notions, like "pain-in-humans" and "pain-in-squid" and so on--and to treat these as identical to different brain processes--but to reject talk of "pain-in-general." The problem with that response is that we have the intuition that pain is a single kind of mental state that we might share with squid and other creatures. A different response is to say that there is a property of "pain-in-general"; it's just that this property is realized in different ways in different creatures--in something like the way that a disposition like fragility can have different categorical bases in different objects. As you'll see, this kind of move plays a large role in functionalist theories.
The second consideration motivating functionalism comes from a problem that faced the behaviorist. When we discussed behaviorism, we saw that typically there won't be any one-to-one correlations between mental states and behavior. How one behaves depends on all the mental states one has. There's no such thing as the distinctive behavior associated with a single mental state in isolation.
This raised a threat of regress or circularity: it seemed impossible for the behaviorist to define any one mental state until he had already defined lots of other mental states.
Unlike the behaviorist, the functionalist believes that mental states are real internal states that cause our behavior. But the functionalist wants to retain the behaviorist's idea that there are close conceptual connections between our mental states and the kinds of behavior we usually express them with. So the functionalist also faces this threat of regress or circularity. How can he describe the connections between any one mental state and behavior, until he has already defined lots of other mental states?
There are two ways to avoid this threat of regress or circularity. The first is in terms of Turing Machines and machine tables; the second is in terms of the Ramsey/Lewis method for defining terms.
Functionalism in Terms of Turing Machines and Machine Tables

The philosopher Alan Turing (the same Alan Turing who proposed the Turing Test) was trying to better understand the intuitive notion of a problem's being solvable in a purely mechanical or automatic way. (E.g., by a clerk who follows a strict set of rules.) He made this vague intuitive notion mathematically precise by introducing the idea of a very simple but also very powerful computer. This computer has an infinitely long tape that can have symbols written on it. There's a little cart that can go back and forth on the tape, reading what symbols are written at each position, and sometimes erasing them and replacing them with different symbols. The computer is allowed to take arbitrarily long to calculate its final answer, so long as it finishes in some finite amount of time. Nowadays, philosophers call these computers Turing Machines. We can prove that any problem solvable by any digital computer we know how to make can also be solved by a Turing Machine. This makes Turing Machines very useful for philosophical and logical discussions of computers.
A Turing Machine isn't a real physical computer. It's an abstract mathematical model. We can't really build Turing Machines (since their tapes would have to be infinitely long). And even if we could build them, they wouldn't be very efficient. They can in principle execute any computer program we know how to write, but it would take them much longer to do so than the Macs and PCs we're more accustomed to.
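To make this picture concrete, here is a minimal sketch of a Turing Machine simulator in Python. The particular program shown (one that flips every bit of its input) and all of the names here are my own illustrative choices, not anything from the text; the essential idea is just that the whole machine is specified by a table mapping (state, symbol) pairs to actions.

```python
# A minimal Turing Machine simulator. A "program" is a machine table:
# (current state, symbol under the head) -> (new state, symbol to write,
# direction to move). The tape is conceptually infinite; a dict that
# defaults to a blank cell lets us pretend that it is.

def run_turing_machine(table, tape, state="start", steps=1000):
    cells = dict(enumerate(tape))      # position -> symbol
    head = 0
    for _ in range(steps):
        symbol = cells.get(head, "_")          # "_" marks a blank cell
        if (state, symbol) not in table:       # no rule applies: halt
            break
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(cells[i] for i in sorted(cells))

# An illustrative program: flip each bit, moving right until a blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_turing_machine(flip, "1011"))   # prints "0100"
```

Notice that nothing in the table says what the tape or the cart is made of; the program is specified entirely by how states, symbols, and movements are linked.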
Let's introduce a different but related way of modelling a computer. Suppose we're designing a piece of software to run a rudimentary Coke machine. Coke costs 15¢, and the machine only accepts nickels and dimes.
The Coke machine has three states: ZERO, FIVE, TEN. If a nickel is inserted and the machine is in state ZERO, it goes into state FIVE and waits for more input. If a nickel is inserted and the machine is already in state FIVE, it goes into state TEN and waits for more input. If a nickel is inserted and the machine is already in state TEN, then the machine dispenses a Coke and goes into state ZERO. If a dime is inserted and the machine is in state ZERO, it goes into state TEN and waits for more input. If a dime is inserted and the machine is already in state FIVE, then it dispenses a Coke and goes into state ZERO. If a dime is inserted and the machine is already in state TEN, then it dispenses a Coke and goes into state FIVE. (Alternately, if we wanted the machine to give change, it could dispense a Coke and a nickel and go into state ZERO.)

Notice how the system's response to input depends on what internal state the system is already in when it receives the input. When you put a nickel in, what internal state the Coke machine goes into depends on what its internal state was when you inserted the nickel.
Our Coke machine's behavior can be specified as a table, which tells us, for each combination of internal state and input, what new internal state the Coke machine should go into, and what output it should produce:

              nickel inserted                dime inserted
    ZERO      go into FIVE, no output        go into TEN, no output
    FIVE      go into TEN, no output         dispense Coke, go into ZERO
    TEN       dispense Coke, go into ZERO    dispense Coke, go into FIVE
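A machine table of this kind can be written out directly as a lookup table in code. The following Python sketch is my own illustration of the transitions just described; the names are invented for the example.

```python
# The Coke machine's machine table as a dictionary:
# (current state, coin inserted) -> (new state, output produced).
TABLE = {
    ("ZERO", "nickel"): ("FIVE", None),
    ("ZERO", "dime"):   ("TEN",  None),
    ("FIVE", "nickel"): ("TEN",  None),
    ("FIVE", "dime"):   ("ZERO", "Coke"),
    ("TEN",  "nickel"): ("ZERO", "Coke"),
    ("TEN",  "dime"):   ("FIVE", "Coke"),
}

def insert_coins(coins):
    """Feed a sequence of coins to the machine; return the outputs."""
    state, outputs = "ZERO", []
    for coin in coins:
        state, output = TABLE[(state, coin)]
        if output is not None:
            outputs.append(output)
    return outputs

print(insert_coins(["nickel", "nickel", "nickel"]))   # prints ['Coke']
```

The whole "program" is just the table: what the machine does with a coin is fixed by which internal state it is already in.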
In abstract terms, any computer program can be understood like this. Any system implementing the program has a number of different internal states. Implementing the program entails linking these internal states up in such a way that, for each combination of internal state and input, the system goes into the new internal state, and produces the output, that the program specifies.
Notice that when we described our Coke machine, we didn't say what it was built out of. We just talked about how it works, that is, how its internal states interacted with each other and with input to produce output. Anything which works in the way we specified will suffice. What sort of stuff it's made out of doesn't matter.
In addition, we said nothing about how the internal states of the Coke machine were constructed. Some Coke machines might be in the ZERO state in virtue of having a gear in a certain position. Other Coke machines might be in the ZERO state in virtue of having a current passing through a certain transistor. We understand what it is for the Coke machine to be in these states in terms of how the states interact with each other and with input to produce output. Any system of states which interact in the way we described counts as an implementation of our ZERO, FIVE, and TEN states.
So our machine tables show us how to understand a system's internal states in such a way that they can be implemented or realized via different physical mechanisms. This point is put by saying that the internal states described in our machine tables are multiply realizable.
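The point about multiple realizability can itself be illustrated in code. Here are two hypothetical Python classes of my own devising: one "realizes" the ZERO, FIVE, and TEN states as gear positions, the other as voltages. Their internal mechanisms differ, but they implement exactly the same machine table, so by the functionalist's lights each counts as being in the very same states.

```python
class GearCokeMachine:
    """Realizes ZERO, FIVE, TEN as gear positions 0, 1, 2."""
    def __init__(self):
        self.gear = 0                     # gear position 0 realizes ZERO
    def insert(self, coin):
        cents = {"nickel": 5, "dime": 10}[coin]
        total = self.gear * 5 + cents     # credit in cents, plus the coin
        self.gear = (total % 15) // 5
        return "Coke" if total >= 15 else None

class TransistorCokeMachine:
    """Realizes the same states as voltages 0.0, 2.5, 5.0."""
    def __init__(self):
        self.volts = 0.0                  # 0.0 volts realizes ZERO
    def insert(self, coin):
        cents = {"nickel": 5, "dime": 10}[coin]
        total = self.volts / 0.5 + cents  # 0.5 volts per cent of credit
        self.volts = (total % 15) * 0.5
        return "Coke" if total >= 15 else None

# Physically different innards, identical state-to-state behavior:
for machine in (GearCokeMachine(), TransistorCokeMachine()):
    print([machine.insert(coin) for coin in ("dime", "nickel")])
    # both print [None, 'Coke']
```

Nothing about gears or voltages figures in what makes each machine count as being in state TEN; all that matters is that some internal condition plays the TEN role in the table.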
Computers are just very sophisticated versions of our Coke machine. They have a large number of internal states. Programming the computer involves linking these internal states up to each other and to the outside of the machine so that, when you put some input into the machine, the internal states change in predictable ways, and sometimes those changes cause the computer to produce some output (dispense a Coke).
From an engineer's point of view, it can make a big difference how a particular program gets physically implemented. But from the programmer's point of view, the underlying hardware isn't important. Only the software matters. That is, it only matters that there be some hardware with some internal states that causally interact with each other, and with input and output, in the way the software specifies. Any hardware that does this will count as an implementation or realization of that software.
The functionalist about the mind thinks that all there is to being intelligent, really having thoughts and other mental states, is implementing some very complicated program. In a slogan, the brain is the hardware and the mind is the software. In us, this software is implemented by a human brain, but it could also be implemented on other hardware, like a Martian brain or a digital computer. And if it were, then the Martian brain and the computer would have real thoughts and mental states, too. So long as there's some hardware with some internal states that stand in the right causal relations to each other and to input and output, you've got a mind.
A line of reasoning often offered in support of this view is the following:
Problems with Turing Machines as Models of the Mind

These notions of Turing Machines and machine tables were historically very important in the development of functionalism. This was for two reasons.
Originally functionalists said that our minds were Turing Machines, and that mental states like belief and desire were just different states of this Turing Machine, in the same way that ZERO and FIVE and TEN are different states of our Coke Machine.
But remember that Turing Machines and machine tables are just two ways (mathematically elegant ones) of formally specifying a piece of software. They have some distinctive features that other ways of specifying a piece of software do not. (This is akin to the differences between different programming languages.) So even if the analogy between minds and software is correct at a general level, the attempt to spell this analogy out in terms of Turing Machines and machine tables might face special problems. And indeed it does.