Phil 340: Functionalism (Part 2 of 4)

Matching Input/Output is Not Enough

The name “functionalism” can be misleading. It makes it sound like what’s important about a mental state is what it’s for, that is, what purpose it serves. And perhaps you can interpret those slogans, and the functionalist ideas we’ve been exploring, so that this description fits. But it’s not obvious that you can. For example, a biologist might talk about the “function” of sexual attraction. It’s not obvious that they’d be talking about the same patterns of causal relations to other states, inputs, and outputs, that functionalists would think define feelings of attraction.

Another way in which the name “functionalism” can be misleading is that there’s a widespread tendency in math, logic, and philosophy to think of functions as mappings from inputs to outputs. (There are also uses of the concept “function” where that picture doesn’t fit, but the mapping from input to output picture is much more familiar.) And if we tried to apply that picture to what the functionalist is saying, it would severely distort their proposal.

Consider a human being Harriet. Over the course of her whole life, she receives a complex sequence of sensory inputs, and she responds with a complex sequence of behavior. That’s how her life in fact went.

Now consider another creature Glenda, who also receives the same sequence of inputs and responds with the same sequence of behavior. But the difference is that Glenda would have responded with that behavior no matter what input she was exposed to.

Glenda’s actual mapping from inputs to outputs is the same as Harriet’s. But the functionalists don’t want to say that Glenda has the same mental life as Harriet does. They don’t need to say that Glenda has any mental life at all. To be realizing or implementing the same program as Harriet does, it’s not enough to get the same actual input that Harriet does and respond with the same actual behavior. A creature also needs to be such that if the input had been different, it would respond in the same way that Harriet would have responded.
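Here is a toy sketch of the contrast in Python (my own illustration; the inputs and responses are invented). Harriet’s outputs are computed from her inputs, while Glenda would have produced the same outputs no matter what came in. On the actual input sequence their behavior matches, but their counterfactual profiles differ:

    def harriet(inputs):
        """Harriet's responses depend on what she perceives."""
        return ["greet" if i == "hello" else "flinch" for i in inputs]

    def glenda(inputs):
        """Glenda plays back a fixed script, ignoring her inputs entirely."""
        script = ["greet", "flinch", "greet"]
        return script[:len(inputs)]

    actual_inputs = ["hello", "loud noise", "hello"]
    assert harriet(actual_inputs) == glenda(actual_inputs)  # same actual history

    # Counterfactually they diverge: vary the inputs and only Harriet varies.
    other_inputs = ["loud noise", "hello", "hello"]
    assert harriet(other_inputs) != glenda(other_inputs)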

But even that is not enough. Because the functionalist doesn’t define mental states just in terms of mappings from inputs to outputs (not even merely possible inputs to outputs). They define them also in terms of how those states are causally related to each other.

On pp. 140 and 141 Kim describes two Turing Machines that have the same mapping from inputs to outputs, but use different strategies to achieve that. (The inputs are two base-1 numbers separated by a “+”, and the output is a third base-1 number which represents the sum of the inputs.) We can think of these as two algorithms for reaching the same result. The two algorithms will correspond to different machine tables, and thus, different programs.
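Since we don’t have Kim’s machine tables in front of us, here is a loose stand-in in Python (my own sketch, not Kim’s exact machines): two procedures that compute the very same unary-addition mapping by different algorithms.

    def add_unary_fill(tape):
        """Strategy A: overwrite the '+' with a '1', then erase one trailing '1'."""
        return tape.replace("+", "1")[:-1]

    def add_unary_shift(tape):
        """Strategy B: move '1's across the '+' one at a time, then erase the '+'."""
        left, right = tape.split("+")
        while right:
            right = right[:-1]  # erase a '1' to the right of the '+'...
            left += "1"         # ...and write one on the left
        return left

    # Behaviorally equivalent: the same mapping from inputs to outputs.
    for tape in ["111+11", "1+1111", "111+", "+"]:
        assert add_unary_fill(tape) == add_unary_shift(tape)

A trace of the intermediate steps, though, would reveal different sequences of internal states, and so different machine tables and different programs.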

For the functionalist, what algorithm you’re running can make a difference to what mental state you’re in — or even if you’re in any mental state at all. This is what Kim is getting at when he talks about Turing Machines that are merely “behaviorally equivalent” (they have the same mappings from possible inputs to outputs), and Turing Machines that are real machine descriptions of a system (these correctly model the algorithms that system uses to reach that input-output mapping).

Functionalists say that what mental states a system has depends on what machine tables are real descriptions of it, in this sense.

Thus a functionalist is allowed to say that even if some creature has sophisticated enough behavior to pass the Turing Test — and not just given some actual sequence of questions, but is flexible enough to generate convincing output for any possible inputs — that doesn’t yet guarantee that the system is intelligent. It matters what internal algorithm the system is using. Executing some algorithms may constitute having mental lives like ours. Others might constitute different mental lives. Still others might constitute no mental life at all. (For instance, if a chatbot is complex enough, it might pass the Turing Test. But if all it’s doing is looking up responses in a giant table, there’s probably no mental life going on. See Block pp. 381–384.)
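Here is a miniature version of Block’s lookup-table idea in Python (the table is tiny and invented; Block imagines one big enough to cover every possible conversation). The bot keys its reply to the whole conversation so far, so its input-output mapping can be as flexible as you like, yet internally nothing happens but retrieval:

    GIANT_TABLE = {
        (): "Hello!",
        ("Hello!", "How are you?"): "Fine, thanks. You?",
        ("Hello!", "What is 2+2?"): "Four, of course.",
    }

    def blockhead(history):
        """Reply by brute lookup on the conversation so far: no interacting
        internal states, just retrieval from a pre-built table."""
        return GIANT_TABLE.get(tuple(history), "Interesting. Tell me more.")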

Analytic vs Scientific Functionalism

As I said at the beginning, functionalism is not a single theory but a group of theories. Here are some of the choices functionalists have to make.

Role vs Realizer Functionalism

Another choice that functionalists have to make is whether they want to identify mental states with the general roles they describe, or instead with the particular realizations of those roles that different creatures have. (Kim discusses this issue at pp. 183–189.)

Whenever we talk about the causal roles played by some device’s states, there are these two kinds or levels of state to talk about: the role state, which is the state of having some internal state or other that plays the causal role in question; and the realizer states, which are the particular underlying states that play that role in a given device.

Here’s another example to help clarify this distinction. Recall the notion of a disposition, from our discussion of behaviorism. We can talk about role states and realizer states with dispositions. With the disposition of fragility, there is the state of having some underlying state or other which will cause the item to break when struck. This is the role state for fragility. All fragile objects share this role state. In different fragile objects, though, this role state is realized by different underlying facts. In a wine glass, the fragility is realized by a certain kind of crystal structure. In a soap bubble, it is realized by certain kinds of tension in the object. So the glass and the soap bubble are in different realizer states, but the same role state.
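If you’ve done some programming, here is a rough analogy (my own, not from the readings): the role state is like an interface or abstract class, and the realizer states are like the concrete classes that implement it.

    from abc import ABC, abstractmethod

    class Fragile(ABC):
        """The role state: having some underlying state or other that
        causes breaking when struck."""
        @abstractmethod
        def struck(self):
            ...

    class WineGlass(Fragile):
        """One realizer: a crystal structure that shatters under impact."""
        def struck(self):
            return "shatters along crystal planes"

    class SoapBubble(Fragile):
        """Another realizer: surface tension that gives way under impact."""
        def struck(self):
            return "pops as the film's tension is released"

    # Both objects share the role state, but their realizer states differ:
    assert all(isinstance(x, Fragile) for x in (WineGlass(), SoapBubble()))
    assert type(WineGlass()) is not type(SoapBubble())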

Another example to consider involves uncles and nieces. My brother Michael and I have a sister, who has a daughter we’ll call Jocelyn. My friend Peter also has a sister, who has a daughter we’ll call Rachel. Being an uncle requires having some sibling who has a daughter (or a son). All three of us — me, Michael, and Peter — have this property in common, even though for some of us the sibling and daughter involved are different than they are for others. The relation of being a niece, on the other hand, is one that Jocelyn stands in to me and Michael, but not to Peter. And Rachel stands in it to Peter, but not to me and Michael.

In this example, being an uncle is like a role property: it’s something we three uncles all share. But some of us realize that state by having a niece Jocelyn, and others by having a niece Rachel. We don’t all share the same nieces. Peter has his niece, and Michael and I have ours. What we share with Peter is the more general status of being uncles. That’s something we have in common. It abstracts away from the particulars of who your nieces/nephews happen to be.

The functionalist says that mental states like pain are each associated with a certain causal role. In this case, too, we can distinguish between: the role state of being in pain, that is, having some internal state or other that plays the pain-role; and the realizer states, the particular internal states that play the pain-role in a given kind of creature.

If a human, a squid, a Europan alien, and a mechanical computer are all systems correctly described by the functionalist’s definition of pain, then there will probably be different states playing the pain-role in each creature. This is like the nieces in our analogy. The nieces correspond to what’s different in these creatures. The status of being an uncle corresponds to what they have in common: namely that something plays the niece role for each of them.

A role functionalist identifies mental concepts (pain, believing that Charlotte is south of Carrboro, planning to make coffee, and so on) with the general roles. So they’ll say that pain is a single state that all of these creatures share. (Like being an uncle is a single status that Peter and I share.) A realizer functionalist identifies the mental concepts with the realizers instead. So they’ll say that my pain is a different type of state than the squid’s pain, and so on. (Like Peter’s niece is a different person than my niece.) The different “pains” these creatures have can’t be shared, even if they “feel the same” to each of them, because they’re implemented in physically different ways.

Role functionalism has the advantage that it can allow that creatures with different physiologies or brain hardware share some mental states. (Perhaps squid often have different feelings and beliefs than we do, but do we want to rule out even the possibility that we have some mental states in common?) Those shared mental states may just be realized differently in the different creatures. This motivating idea that mental states are “multiply realizable” fits best with role functionalism.

On the other hand, realizer functionalism has its own advantages. We’re reading Armstrong this week advocating for a kind of functionalism, and this is the picture he was working with. We’ll also be reading an article by Lewis that follows this strategy. These philosophers want to retain the identity theorist’s picture that mental states are inner states that cause our behavior. And that sounds to them most like the states that realize the functionalist’s roles. As we’ll discuss in coming weeks, it’s not clear that the roles themselves can also make those causal claims.

Functionalism borrowed inspiration from both behaviorism and identity theory. When functionalism identifies mental states with roles, it ends up looking like an improved/more sophisticated form of behaviorism. It’s improved in that it allows many mental states to be defined at the same time, and it defines them also in terms of their interaction with inputs and each other, not only their interactions with behavioral output. But functionalism would still picture a mental state as a kind of complex disposition. On the other hand, when functionalism identifies mental states with realizers, it ends up looking like an improved/sophisticated form of identity theory. The realizer functionalists agree that we have the same complex dispositions that the role functionalists talk about, but they identify our mental states with the different categorical bases that implement those dispositions in different creatures.

For example, if what realizes the pain-role in humans is firing C-fibers, then the realizer functionalist will say that being in pain is just having firing C-fibers. Creatures without C-fibers cannot be in that state. The role functionalist, on the other hand, says that pain is the state of having the pain-role played by some internal state or other. Having firing C-fibers is but one way to do this. Creatures without C-fibers can also be in this role state.

Although Lewis identifies pain with the realizer state for pain, he wants to allow that creatures who are biologically quite different from us can nonetheless be in pain. So what he suggests is the following. There is a single causal role associated with pain. For a given species or population S, we take the state which realizes that causal role for typical members of the species. This state will be pain-for-that-species. For instance, having firing C-fibers will be pain-for-humans. Other species will realize the causal role in different ways. So, on Lewis’s view, pain-for-those-other-species will be a different sort of state. (In Lewis’s discussion of this, the other species are aliens from Mars, so he calls this Martian pain.)

Note the difference between Lewis’s view and the role functionalist’s view. The role functionalist says that pain in humans is the very same state as pain in other species. It’s just that pain might be realized differently in the different species. Lewis, on the other hand, denies that pain is the same state in different species. He grants that each species’ state of pain plays the same causal role. But, on his view, pain in the one species is a different state than pain in the other species.
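To put the contrast schematically, here is a Python sketch (all the names are invented, and the “hydraulic” entry is only a loose gloss on Lewis’s Martians). For the role functionalist, being in pain is one condition any creature can satisfy; for Lewis, “pain” picks out a different realizer state for each species:

    TYPICAL_REALIZER = {
        "human": "C-fiber firing",
        "martian": "hydraulic state",  # a loose gloss on Lewis's Martians
    }

    def pain_lewis(species):
        """Lewis: pain-for-a-species is that species' typical realizer."""
        return TYPICAL_REALIZER[species]

    def in_pain_role(internal_states, species):
        """Role functionalism: in pain iff some internal state plays the
        pain role (here, the species' realizer stands in for 'plays the role')."""
        return TYPICAL_REALIZER[species] in internal_states

    # One shared role state, two distinct realizer states:
    assert in_pain_role({"C-fiber firing"}, "human")
    assert in_pain_role({"hydraulic state"}, "martian")
    assert pain_lewis("human") != pain_lewis("martian")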