Spring 2016, NYU Abu Dhabi


Complaints against Reliabilism

Problem Cases

There are three sorts of counter-examples standardly offered as objections to reliabilist accounts of justification:

Technical Difficulties

The Generality Problem

The Generality Problem is the problem of specifying exactly which process it is whose reliability determines how justified your belief is. Any given belief you form was produced by a whole range of processes, of varying degrees of specificity. For instance, if you look out the window and form the belief that it's raining, all of the following are processes responsible for the formation of that belief:

  - vision

  - forming beliefs about the weather on the basis of vision

  - forming beliefs about the weather on the basis of looking out this window

  - forming the belief that it's raining on the basis of seeing droplets splashing on the pavement just like that, while looking through a window at exactly this angle

These processes differ in how reliable they are. Which of them should we look at when we're assessing your belief that it's raining?
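To make the question vivid, we can give reliability a schematic truth-frequency rendering (the notation is ours, not part of any official reliabilist formulation). Reliability attaches to process types, and a type T's reliability is the proportion of true beliefs among the beliefs that processes of that type produce:

\[
\mathrm{rel}(T) \;=\; \frac{\#\{\text{true beliefs produced by processes of type } T\}}{\#\{\text{beliefs produced by processes of type } T\}}
\]

The token process that produced your belief that it's raining instantiates every type on the list above, and different types will get different values of rel(T). So the reliabilist owes us a principled way of selecting the type whose reliability matters.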

In "Reliability and Justified Belief" (an optional reading on reserve in Robbins Library), Richard Feldman argues that the reliabilist faces two dangers here: one danger threatens if he chooses too general a process, and the other danger threatens if he chooses too specific a process.

  1. If the reliabilist says that the justification of my belief depends on the reliability of some very general process, like vision, then he confronts Feldman's "No Distinction" worry.

    The problem here is that the set of beliefs formed on the basis of vision includes beliefs of obviously different epistemic status. For instance, my visually-based belief about the gender of a distant figure seen through a dirty window-pane is obviously less justified than my visually-based belief about the shape of a coin I scrutinize closely in good light. Any good account of justification should distinguish between these beliefs. It should not make them all come out to be equally justified. So we don't want to go along with the reliabilist and say that all beliefs formed by the process of vision are justified to the same extent.

  2. If the reliabilist says that the justification of my belief depends on the reliability of some very specific process, like the process of forming a belief that it's raining on the basis of seeing droplets splashing on the pavement just like that while looking through a window at exactly this angle, etc., then the reliabilist confronts Feldman's "Single Case" worry.

    The problem here is that if the process is extremely specific, then in all the history of the world there might have been only one belief formed by it--namely, my current belief that it's raining. Now, when we ask the question "Is this process reliable?" we're asking whether it tends to produce true beliefs. If the process is so specific that it has only ever produced a single belief, then whether or not it tends to produce true beliefs will just depend on whether or not this single belief is true. If the belief is true, then the process tends to produce true beliefs, and so it's reliable. If the belief is false, then the process tends to produce false beliefs, and so it's unreliable. Hence, whether or not the process is reliable seems just to depend on whether or not this single belief is true.

    The reliabilist tells us that a belief is justified iff the process by which it was produced was reliable. We've just seen an argument that, since the process we're considering is so very specific, whether or not that process is reliable depends on whether or not my current belief that it's raining is true. Hence, whether or not my belief is justified depends on whether or not it's true. If my belief is true, it's justified. If my belief is false, then it's unjustified. This seems an unacceptable result. Clearly there's a difference between being justified and being true. We think that it ought to be possible for a belief to be justified but nonetheless false. So this reliabilist strategy for selecting processes doesn't seem to work, either.
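In the schematic notation introduced above (again, a reconstruction for vividness, not Feldman's own formalism), the Single Case worry is that when the type T is so specific that it has produced exactly one belief b,

\[
\mathrm{rel}(T) \;=\;
\begin{cases}
1 & \text{if } b \text{ is true}\\
0 & \text{if } b \text{ is false}
\end{cases}
\]

so the reliabilist biconditional "b is justified iff rel(T) is high" collapses into "b is justified iff b is true"--just the unacceptable result described above.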

The Range Problem

The Range Problem is the problem of specifying where a process has to be reliable--in what range of possible environments?--in order for beliefs produced by it to count as justified.

So far, we've been assuming that for a subject S's belief to count as justified, it has to be produced by a process which reliably produces true beliefs in S's environment. This is why the reliabilist seems committed to saying that brains in vats can't have justified beliefs: most of the processes by which the brains form beliefs tend to produce false beliefs in their environment.

But perhaps the reliabilist can say instead that for S's belief to count as justified, it has to be produced by a process which reliably produces true beliefs in our environment, the environment we actually occupy. Then we can say that the brains in vats have justified beliefs, after all: for the processes they use to form beliefs tend to produce true beliefs when used in our environment.

Or so we believe. But what if it turns out that we are brains in vats? Then the processes by which we form beliefs are unreliable even in our own environment, and so our beliefs wouldn't count as justified. (What's more, since our environment would now be the place where a process has to be reliable for the beliefs it produces to count as justified, no beliefs produced by those processes would count as justified--not even beliefs formed in some other environment where the processes are reliable.) This doesn't seem a satisfactory result.

Here's another proposal: the reliabilist can say that for S's belief to count as justified, it has to be produced by a process which reliably produces true beliefs in worlds that work the way we think our world generally works. In his book Epistemology and Cognition, Goldman calls these "normal worlds." (This is not a very illuminating choice of terminology!) One of the general beliefs we have about the world is that we're not brains in vats, so the "normal worlds" will be worlds in which we're not brains in vats. (That might include our actual world, or it might not. It depends on whether we turn out to be brains in vats.) On the present proposal, beliefs formed by perception will count as justified iff they're produced by processes which reliably produce true beliefs in those "normal worlds." It's plausible that in any world which works the way we think our world generally works, perception will be reliable. Hence, any beliefs we form by perception count as justified, on this proposal--even if we turn out to be brains in vats.
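It may help to see the three proposals side by side. Schematically (our notation, not Goldman's), each says that S's belief b is justified iff b was produced by a process that is reliable in some range of environments E:

\[
J(b) \iff b \text{ was produced by a process that reliably yields true beliefs in } E
\]

where the candidates for E are:

  - E = S's own environment (so brains in vats can't have justified perceptual beliefs);

  - E = our actual environment (so if we turn out to be envatted, nobody's perceptual beliefs count as justified);

  - E = the "normal worlds," worlds that work the way we think our world generally works (so perceptual beliefs come out justified even if we're envatted).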

Unfortunately, there are problems for this proposal, too.

Perhaps the reliabilist can overcome these difficulties. Or perhaps he can abandon the notion of "normal worlds" and offer some different answer to the Range Problem. In any case, it's clear that there are no easy and straightforward answers to this problem.

Regulating Our Beliefs

Another problem for the reliabilist concerns the regulative role we think justification ought to play in our epistemic inquiries.

You don't always have control over what you believe. But sometimes you do. And you have some control over what your epistemic habits are--and this indirectly affects which beliefs you end up with.

Now we want to have true beliefs. But we can't directly ensure that all our beliefs are true. (If we already knew what the truth was, then the question of what to believe would have already been settled!) What we can directly ensure is that our beliefs are justified or reasonable. This seems to us to be a good way to get true beliefs. If we make sure our beliefs are justified, then those beliefs are likely to be true.

On this picture, then, when we're deciding what to believe, or what sorts of epistemic habits to adopt, we aim to form beliefs which are reasonable, or epistemically likely to be true. In other words, how justified a belief is (or how justified it seems to us to be) plays a certain role in guiding and regulating our epistemic activities. The recipes we follow when deciding what to believe tell us to accept those beliefs which are justified, and to reject those beliefs which are unjustified.

But can justification play this regulative or belief-guiding role if an externalist account of justification is right? It's hard to see how it could.

Suppose you take on a new job at the nuclear power plant and I instruct you to press a certain button if the temperature of the reactor core goes above a certain point. You see a dial which is labeled "Reactor Core Temperature." You ask me, "So what you mean is, I should press this button whenever the indicator on that dial goes above that line?" Now suppose I respond, "No, that's not what I mean. That dial might not be working properly. I want you to press the button whenever the reactor core is above the danger point, regardless of what that dial says." You wouldn't know how to follow my instructions. I'm asking you to regulate your activities by a guide-post which you don't have access to, when performing those activities. It doesn't seem possible to do that.
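The point of the example can be put in quasi-programming terms (a hypothetical sketch, not part of the original discussion): an instruction can guide your behavior only if every condition it mentions is something you can actually read off.

    # Hypothetical sketch: a rule can guide an operator only if it is
    # written in terms of inputs the operator has access to.

    def followable_rule(dial_reading: float, danger_line: float) -> bool:
        # "Press the button whenever the indicator on the dial goes above
        # that line."  Followable: the operator can read the dial.
        return dial_reading > danger_line

    def unfollowable_rule(core_temperature: float, danger_point: float) -> bool:
        # "Press the button whenever the core is above the danger point,
        # regardless of what the dial says."  Well-defined, but the operator
        # has no way to supply core_temperature, so the rule cannot guide
        # her actions.
        return core_temperature > danger_point

On an externalist account, "believe p only if your belief-forming process is reliable" looks like an instruction of the second kind.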

The same lesson seems to apply in the epistemic case. When you're trying to decide what to believe, facts about the reliability or causal history of your beliefs don't seem like things you'd have access to. You would already have to rely on some beliefs about the external world, before you'd be entitled to an opinion about those matters. And you're trying to decide what beliefs to rely on. It seems that, while you're doing that, you can't guide your efforts by facts about the reliability or causal history of your beliefs. You can't regulate your choice of beliefs by any external guide-posts.

This seems to show that what justifies your belief has to be "internally available," if justification is going to play the regulative role we've described.

How might an externalist respond to this criticism?