Recall the sheep-in-the-meadow case we were discussing before:
You're in the meadow, and you see a rock which looks to you like a sheep. So you say to yourself "There's a sheep in the meadow." In fact there is a sheep in the meadow (behind the rock, where you can't see it).
This seems to be a case where you have a justified true belief that there's a sheep in the meadow, that fails to be knowledge. Now, one salient feature of this case is that you can't really see the sheep. You just think that you do. The fact that there really is a sheep in the meadow, which you don't see, seems to be just a gratuitous accident. It doesn't have anything to do with your belief or your evidence for your belief.
Another example to the same effect:
You look at a shelf. An evil neuroscientist has some electrodes wired up to your brain, and he causes you to have visual experiences as if there were a clock on the shelf. As it turns out, there really is a clock on the shelf. (But you would still be having clock-experiences even if there weren't.)
In that case, it seems like you have a justified belief that there is a clock on the shelf, and it's true that there is a clock on the shelf, but you don't know that there is a clock there. Relative to what you really see, it's just an accident that there happens to be a clock there.
One solution to the Gettier Problem suggested by these cases is to say that you know that P iff you truly believe that P and it's not an accident that you are right about P. The difficulty for this view is to explain what we mean by "it's being an accident that you are right about P." This is especially difficult because not every accident generates a Gettier case.
Tommy comes across some evidence that his wife is sneaking around. She isn't at work when she's supposed to be, he finds some matches from a fancy nightclub in her car, and so on. When he asks his wife where she's been, she is evasive. This gives him some evidence for believing that his wife is having an affair. As a matter of fact, there is a simple explanation for all the evidence that Tommy has encountered: his wife is planning a surprise birthday party for him. But as it also turns out, his wife is having an affair, with her old boyfriend in Chicago. But she's very discreet about it, and so hasn't left any clues lying around.
This is a case where Tommy has a true belief that his wife is unfaithful, and it's just an accident that his belief is true. So far, so good. According to the present proposal, Tommy doesn't know that his wife is unfaithful. And that seems intuitively to be the correct thing to say about this case.
But now consider a second case.
Tommy goes on a business trip to Chicago, checks into his hotel, and goes up to his room. By accident, he gets off the elevator on the wrong floor, and opens the wrong door. But the door does open, and there is his wife, in bed with her old boyfriend.
In this case, too, Tommy has a true belief that his wife is being unfaithful to him. And in this case, too, it seems to be just an accident that Tommy has a true belief about this. But in this case, we do want to say that Tommy knows that his wife is being unfaithful.
So the problem is to explain the difference between the kinds of "accidents" illustrated in the first case, which do block your true beliefs from counting as knowledge, and the kinds of "accidents" illustrated in the second case, which don't. This is difficult to do. (You may be able to explain the difference between these two particular examples that I've given. What is harder is to come up with an explanation of the differences between all the cases where your true beliefs don't count as knowledge, and all the cases where they do.)
Let's consider a different way of explaining these Gettier Cases. This new solution says that you know that P iff you truly believe that P and you have evidence that P, and the fact that P is causally connected in the right way with your belief or your evidence. (Sometimes this solution omits the reference to "evidence" altogether, and just talks about causal connections between the fact that P and your belief.)
This sounds promising. In the sheep-in-the-meadow case, the problem was that the real sheep played no role in causing your belief that there was a sheep in the meadow. That's why we want to say it's just an accident that you get things right. In the Nogot/Haveit case, Haveit's Ford-ownership likewise plays no role in causing your belief.
When we consider questions of the form "Why is so-and-so's belief correct?" and questions of the form "Why did so-and-so act that way?" there are two kinds of answers we can give.
One kind of answer is a causal answer. This is a case where there is some reason or causal explanation why the belief is true, or why the agent acted as he did, but these need not be reasons the agent is aware of.
A second kind of answer is a rationalizing answer. This has to be some reason that the agent has for believing what he does, or acting in the way he does.
Suppose I say "Sue went to the fridge because the levels of such-and-such chemicals in her bloodstream went down." Here I could be telling either kind of story. If I just mean: the levels of the chemicals in Sue's bloodstream went down, and that made Sue feel a bit hungry, etc., then I'm just giving a causal answer to the question "Why did Sue go to the fridge?" Sue need not have any idea about the chemicals in her bloodstream. Alternatively, suppose that Sue is very worried about the chemicals in her bloodstream, so she has a monitor plugged into her arm. When the levels of certain chemicals drop, Sue sees this right away on the monitor, and she goes to the fridge to get some food so that she can make the chemicals return to the normal level. There the story about the chemicals does report Sue's own reason for going to the fridge. So that is a rationalizing answer to the question "Why did Sue go to the fridge?"
Suppose you and your friend are climbing a mountain, and at one point you're holding the rope while your friend climbs up below you. As you stand there, you remember something your friend did that really pissed you off. A smoldering desire to kill your friend starts to grow in your mind. Of course, you would never act on that desire. But it shocks you and horrifies you that you would even think of it, and this makes you so nervous that you unintentionally lose your grip on the rope and your friend plummets to his death. We can say "Your desire to kill your friend was one of the reasons why you let the rope slip." But this is only true if we're citing the causal reasons why you let the rope slip. It's not like you intentionally let go of the rope, in order to kill your friend. If you had intentionally let go of the rope, in order to kill your friend, then we could say that your desire to kill your friend played a role in rationalizing what you did.
(Sometimes when people talk about "rationalizations," they only mean to talk about good rationalizations. Here I'm using the notion of "rationalization" in a broader sense, to include both good and bad rationalizations.)
The preceding two cases concerned rationalizations of someone's actions. Let's consider a case where something rationalizes someone's belief.
Suppose Kurt is intensely paranoid. He always believes someone is trying to kill him. You ask me why he believes this, and I say "Because he has such-and-such a defect in his brain." It would be most natural to understand me here as giving a causal explanation of Kurt's belief. But depending on the circumstances, I might also be giving Kurt's own reason for believing what he does. Perhaps Kurt has discovered the defect in his brain, and in his paranoid way, he takes it to be evidence that someone is tampering with his body. He takes it to be evidence that someone is trying to kill him. In that case, it would play a role in rationalizing Kurt's belief. (Again, this need not be a good rationalization.)
(As this last case brings out, one and the same thing might be both a cause of Kurt's belief and something that Kurt takes to be evidence for his belief. So there is no rule that if something is a causal reason why you believe what you do, it can't also play a role in rationalizing your belief.)

The present solution to the Gettier Problem says that what we have to add to a justified belief, to get knowledge, is not more in the way of rationalizations, but rather more in the way of the right sort of causal connections between your belief and the facts in the world that make it true.
When we began this class we were focusing on what evidence a subject has, and how that evidence rationalizes the subject's belief and enables her to have knowledge.
But in the past few classes we've been moving away from that. We've looked at Relevant Alternatives Theories that say that what you know depends on what alternatives are relevant; and that can depend in part on what your environment is like, independently of your having any evidence that it's that way. We've just now entertained a response to the Gettier Problem that emphasizes how your belief is caused, rather than what evidence rationalizes your belief.
Goldman's account of perceptual knowledge pursues both of these ideas.
According to the Relevant Alternatives Theories we've been considering so far, eliminating an alternative Q, or "ruling that alternative out," turned on what sort of evidence you have against Q. Goldman rejects this assumption. He says you don't need to have evidence against Q to eliminate it; it suffices if you're able to "perceptually discriminate" the situation in which P obtains from the one in which Q obtains.
Suppose there are two twins, Judy and Trudy. Suppose moreover that you're usually able to tell them apart. You don't know how you do it, but when you're confronted with one of the twins, you're usually right in your beliefs about which twin it is. You may not be aware of any evidence you use to tell the two apart. Instead, there are subtle differences in their faces and the way they walk that your brain picks up and processes, without any conscious intervention or assistance from you. The details of these cognitive mechanisms are hidden from you. For all you can tell, you just end up with beliefs about which twin is which, and these beliefs tend to be reliable. They're right much more often than not. In such a case, you're able to discriminate between Judy and Trudy although you're not in conscious possession of any evidence that you use to tell them apart.
The view Goldman wants to defend says the following:
Whether you have perceptual knowledge that P is a matter of whether you can reliably discriminate P from its alternatives.
You don't have to discriminate P from all its alternatives, but only from the relevant alternatives.
Reliably discriminating P from certain alternatives doesn't require you to have evidence telling against those alternatives. It only requires you to have cognitive mechanisms which tend to produce correct beliefs about whether P or the alternatives obtain.
As in the Judy/Trudy case, you don't have to know how you discriminate P from its alternatives; nor do you have to know that the method you use is reliable. There just has to be some method, and it just has to be reliable.
So the basic idea of Goldman's view will be this: if you truly believe that P on the basis of your perceptual experiences, then you perceptually know that P, unless there's some alternative to P which is both relevant and which you can't reliably discriminate from P.
Goldman's Two Analyses
Goldman's first attempt to capture his basic idea goes as follows:
S perceptually knows that P iff:
1. P is true.
2. S has a perceptual belief that P.
3. There is no alternative Q which is both relevant and which is such that, if Q obtained, S would still have a perceptual belief that P.
This analysis seems to capture Goldman's basic idea in many cases. For example, suppose S looks out the window of his car and sees a barn. S forms the true belief that there's a barn. One alternative would be the hypothesis that it's just an empty field. However, if that alternative obtained, then S would have different experiences and so he would no longer believe that there's a barn. Hence, S is able to discriminate this alternative from what he believes to be true. Another alternative might be that it's not a barn but just a barn facade. If this alternative obtained, then we can suppose that S would have the same experiences he's now having, and so he would still believe that there's a barn. Hence, S is not able to discriminate this alternative from what he believes to be true. If this alternative is a relevant one, then S won't count as knowing that there's a barn. However, if the only relevant alternatives are like the empty field alternative, then since S is able to discriminate them from what he believes to be true, his belief will count as knowledge.
So far, so good. Goldman's analysis captures the basic idea he wants to base his account of knowledge on. However, there are some problem cases that Goldman's first analysis delivers the wrong result for.
Suppose S sees a dachshund, and that S is able to reliably discriminate dachshunds from larger animals like German Shepherds and wolves. However, we suppose that S is not able to tell German Shepherds and wolves apart very well. Now suppose that there are lots of wolves running around, so the possibility that what S sees is a wolf is a relevant alternative to its being a dachshund. Intuitively, S still knows that what he sees is a dachshund, since he's able to reliably discriminate dachshunds from wolves. So far, so good. But now suppose that S looks at the dachshund and forms the true perceptual belief, "That's a dog." Does this belief count as knowledge? Intuitively, it would. The fact that S mistakes some dogs (German Shepherds) for wolves shouldn't show that S can't know that this dog (which looks nothing like a German Shepherd) is a dog. So the intuitively correct thing to say is that S knows that the creature he sees is a dog, even though there are some other dogs which he can't tell apart from wolves.
Goldman's first analysis doesn't deliver this result though. Goldman's first analysis tells us that S's belief "That's a dog" doesn't count as knowledge, since there's a relevant alternative--the possibility that it's a wolf--and if that alternative obtained, we can suppose that S would mistake the wolf for a German Shepherd and so still hold the belief "That's a dog." So according to the first analysis, S doesn't know the dachshund he sees to be a dog.
What's gone wrong? If the wolf alternative obtained, S would still believe "That's a dog" but he'd believe it on the basis of different experiences. Perhaps that's the problem. The fact that S confuses wolves with some dogs, ones that look very different, ought not to show that S can't know that the dachshund he sees is a dog.
As things stand, Goldman requires there to be no alternative Q such that, if it obtained, S would continue to form the belief that P for any reason. That's what gets him in trouble with the dachshund case. If S were seeing a wolf instead, S would still believe "That's a dog" but he'd believe it on the basis of very different experiences. Perhaps Goldman should say instead that there is no alternative Q such that, if it obtained, S would continue to form the belief that P by the same method, or on the basis of the same sorts of experiences. If there are relevant alternatives to P in which S has the same belief and the same experiences, then S doesn't know that P.
This gives us Goldman's second analysis:
S perceptually knows that P iff:
1. P is true.
2. The state of affairs P causes S to have experiences E.
3. On the basis of those experiences E, S has a perceptual belief that P.
4. There is no alternative Q which is both relevant and which is such that, if Q obtained, S would still have a perceptual belief that P formed via the same method as actually produced his belief that P on the basis of experiences E.
Call a possibility "perceptually equivalent" to P for S just in case the experiences it would produce in S are exactly the same as the experiences that P produces. (Or, if the experiences differ, they do so in respects that are ignored by the mechanisms that produced S's belief that P.) With this notion, we can restate the last condition in Goldman's analysis as follows:
4. There is no alternative Q which is both relevant and which is "perceptually equivalent" to P for S.
Compare Goldman's summation of his account:
What our analysis says is that S has perceptual knowledge iff not only does his perceptual mechanism produce true beliefs, but there are no relevant counterfactual situations in which the same belief would be produced via an equivalent percept [experience] and in which the belief would be false. (p. 786)
This second analysis solves the dachshund problem. If the wolf-alternative obtained, S would still form the belief "That's a dog," but he wouldn't form it by the same method. He would form it on the basis of very different experiences. So we get the intuitively correct result: the relevance of the wolf-alternative doesn't prevent S from knowing the dachshund he sees to be a dog.
There are further problems of detail that Goldman considers on pp. 788-9, but we don't need to go into them here.
Because of its focus on what you can reliably discriminate rather than on what sorts of evidence you have, Goldman's view is very different from traditional accounts of knowledge.
On Goldman's view, whether your true beliefs count as knowledge doesn't depend on what evidence you're conscious of having, or on any other facts about your belief that are open to your introspective, self-reflective consciousness. It depends on how your beliefs were caused, and whether they were caused in a way that's reliable.
Call an account of some epistemological state internalist when it says that the presence or absence of the state depends on facts which are "internally available" to you, that is, knowable on the basis of introspection and reflection. Call an account of some epistemological state externalist when it says that the presence or absence of the state depends on facts which aren't "internally available" to you. There can be internalist or externalist accounts of knowledge, of justification, and of various related notions.
Now, everybody says that whether your beliefs count as knowledge depends in part on whether they're true; and whether your beliefs are true is something that might not be detectable just on the basis of introspection and reflection. So to that extent, everybody has an externalist theory of knowledge. But traditionally, philosophers have mostly thought that truth is the only externalist component of knowledge. They have assumed that all of the other features which go towards making true beliefs count as knowledge are "internally available."
Goldman's view rejects this traditional assumption. On Goldman's view, whether your true beliefs count as knowledge depends on facts about the beliefs (how they were caused, how reliable the mechanisms which caused them are, etc.) which aren't "internally available" to you. When people talk about externalist theories of knowledge, they're usually thinking of views like this one.
Suppose there's someone out there who's a psychological duplicate of you. He has all the same beliefs, thoughts, experiences, and memories as you have. Everything which you can tell about yourself on the basis of introspection and reflection, is also true of him (and like you, he can tell that it's true of him on the basis of introspection and reflection). Call such a person an internal epistemic duplicate of you.
On the traditional accounts of knowledge, if you had a belief which counted as knowledge, and your duplicate had the same belief, and his belief was also true, then his belief would have to count as knowledge too. On the traditional accounts, there can be no difference in epistemic status between internal epistemic duplicates, unless their beliefs differed in truth-value. The truth-value is the only external feature.
On Goldman's account, however, your true belief might count as knowledge, whereas your duplicate's true belief fails to count as knowledge, because his belief was not produced in the same reliable way that yours was. This is a difference between you and your duplicate, but it's not an "internally available" difference. So on this view, there are other external features besides the truth-value of your belief which can make a difference to whether you know.