Theory of Knowledge, Fall 2002
Knowledge, Belief, and Justification
Let's start to clarify a few issues about knowledge and other notions it's related to.
The kind of knowledge we'll be concerned with in this course is called factual or propositional knowledge. That's where you know something to be a fact. For instance, you know that Princeton is located in New Jersey.
There are some cases where we use the word "know" but we're not talking about factual knowledge. For instance:
I know my mother pretty well.
That's not the same--or at least, it's not obviously the same--as knowing any particular fact about my mother. Another example:
I know how to walk.
That too is not the same--or at least, it's not obviously the same--as knowing any fact. There might be this really smart baby who reads all the books about the physiology and mechanics of walking, so he knows all the relevant facts. But he might not yet know how to walk.
In this class, we're just going to focus on factual knowledge, the kind of knowledge that's involved when you know that something is the case.
Most philosophers assume that in order to know that something is the case, you have to believe it to be the case, plus satisfy some extra conditions.
Now this has been disputed.
One kind of objection is the following.
A schoolboy is taking a quiz. One question reads "When was the Battle of Hastings?" He remembers studying about Hastings and some battle, but he has no idea when it happened. But "1066" looks good, so he chooses that. And so on for the rest of the quiz. As it turns out, he gets a score of 95% on the test. He knew more than he thought.
Some philosophers would describe this case like this: "The boy knew what the right answers were, he just didn't believe them." If they're right, then this is a case of knowledge without belief.
Other philosophers would say "He knew what the right answers were, all right. But he also believed that they were the right answers. That's why he chose them. What he didn't have was knowledge that he knew and believed them. (That's why it felt to him as though he were guessing.)"
Still other philosophers would deny that the schoolboy knew the right answers at all. The answers might have "been there" in the back of his brain. But in order to know that they're the right answers, you need to have more confidence in them than the schoolboy had--and you need to be aware of some good reasons for thinking they are the right answers. The schoolboy lacked that altogether.
I'm not going to say much about the first account of the schoolboy. I'm just going to assume, along with the second and third account, that knowing that P always involves believing that P. We'll be spending a lot more time discussing the debate between the second and third accounts as the course proceeds.
Another reason that some people are reluctant to say that knowledge involves belief is that they think "I believe that P" sounds so weak, too weak to be combined with knowledge. Someone might want to say, "I don't believe my name is 'Gretchen,' I know it is."
This brings up an interesting contrast that will play an important role in our discussion of the skeptic. This is the contrast between what a speaker implies by saying something and what her words really mean.
We can see this contrast at work in cases like the following: I'm writing a letter of recommendation for Gretchen and I write "Gretchen has good handwriting and she's always prompt to class." By writing this, I've implied that she isn't very good at philosophy (or at least, that I don't think she is). But that's not what my words mean. My words just mean that she has good handwriting; and there's no incompatibility between having good handwriting and being a good philosopher. It's just that whoever reads my letter will naturally reason, "Well, if this teacher thought that Gretchen was a good philosopher, then you'd expect him to say so in the letter. But he didn't. So the best explanation is probably that he doesn't think she's a good philosopher."
Here's another case. My girlfriend asks me "Do you love me?" and I don't say anything in reply. Now that will make her mad, because she'll conclude that I don't love her. And perhaps by not saying anything I implied that I don't love her. But I didn't say that I don't love her. I didn't say anything at all.
Another case. My girlfriend dumps me, so you're taking me out to a party to cheer me up. As we're driving to the party, we notice that you're almost out of gas. I say "There's a gas station around the corner." Now you'd very naturally take me to be implying that the gas station is open, or at least, that I believe that it's open. But I didn't say it was open. That's not part of what it means for there to be a gas station around the corner. The gas station around the corner might have been closed for 5 years. And I might know that, too.
Now, in most cases, if your words carry an implication that goes beyond what they really mean, it's possible for you to cancel that implication, e.g., by elaborating. So when we notice that you're running out of gas, I might say to you, "There's a gas station around the corner--but I'm not saying that it's open." By saying this I'm not being very helpful. But I'm not contradicting myself, either.
What does all this have to do with knowledge? Well, when Gretchen is tempted to say, "I don't believe my name is 'Gretchen,' I know it is," I think she's responding to a phenomenon of the sort we've been discussing. If she said "I believe my name is 'Gretchen'" she would imply that she wasn't sure, or that there was some doubt about the matter. People would expect that if she knew what her name was (or even thought she knew), she'd say so. But she does know. She doesn't have any doubts about her name. So she doesn't want to imply that she's unsure; that's why she insists "I know it."
OK, perhaps that's right. Perhaps when Gretchen says "I believe my name is 'Gretchen'" she does imply that she doesn't know what her name is. But is that part of what it means to believe something, that you don't also know it? Or is this just something that people would naturally take you to be implying, if you said "I believe..."? It seems more like the second.
Consider this case. My girlfriend goes and gets married to Tom. Now I think that she might be cheating on Tom. In fact, though, I'm wrong. She's completely faithful to Tom, and we can even suppose that he knows that she's faithful. Now suppose I'm talking to you and I want to tell you about Tom. I say, "Tom believes his wife is faithful." I don't want to say he knows it, because I'm not sure whether he does know it. Now we're imagining that Tom does know that his wife is faithful. So when I said he believed she's faithful, is what I said false? No, it doesn't seem to be false. It seems to be true. It's not the whole truth, but it's part of the truth.
I think we should say the same thing about Gretchen's knowing what her name is. When she knows that her name is "Gretchen," she also believes that her name is "Gretchen." It's just that, she doesn't just believe it, she also knows it. If she were to say "I believe that my name is 'Gretchen'," we'd take her to be implying that she didn't know for sure. But there wouldn't be any contradiction if she went on to say, "...and what's more, I don't just believe it, I also know it."
So there doesn't really seem to be any tension between what it means to believe something, and having knowledge of the same thing. So there's no obstacle here to the idea that knowing that P always involves believing that P.
To know that P, is it enough to believe that P? No, it seems like you also need to have some good reason for your belief. If you were just guessing, we wouldn't count that as knowledge. (Even if you happened to guess right. Then you'd just be lucky. You wouldn't know.)
So to know P requires more than just believing P. (And more than just believing P and happening to be right.) It also requires you to have good reasons for believing P, or evidence in favor of P, or some justification for believing P. (We will treat all these notions as synonymous.)
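It may help to pull the conditions mentioned so far into a single schema. (This is just a summary of what we've said, not a finished analysis; whether these conditions are enough is a question we'll return to.)

$$ S \text{ knows that } P \;\Rightarrow\; \begin{cases} S \text{ believes that } P, \\ P \text{ is true}, \\ S \text{ has justification for believing } P. \end{cases} $$

The arrow goes only one way: we're saying these conditions are necessary for knowledge, and leaving open for now whether they're jointly sufficient.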
At the start of class, I said that epistemology is the study of the questions "What is knowledge?" and "Do I have any?" Well, because there seems to be some close connection between knowledge and reasons (or evidence or justification), epistemologists also pay a lot of attention to the questions "What is it reasonable for me to believe?" and "How exactly is knowledge related to reasonable or justified belief?"
In fact, many epistemologists--myself included--think that justification is a more fundamental and interesting notion for epistemologists to study than knowledge is. (Other epistemologists think that, in fact, knowledge is the more fundamental property.)
As we'll see this term, it is controversial just what "justification" or "reasonable belief" amount to. It is also controversial what the connections are between knowledge and justified belief.
Let's try to disentangle the notion of justification a bit.
When we say that you have justification for believing P, we don't necessarily mean that you'll be able to stand up in court and present a good argument for P. You can imagine a case where you have very good evidence for believing P, and your roommate the lawyer has much poorer evidence for not-P, but your roommate is able to out-argue you. So there's a difference between what you have justification or evidence for believing, and what your abilities as a debater are. There's a difference between:
(i) the epistemic status of being justified, or having justification for believing something
and
(ii) the activity of defending or giving a justifying argument for a claim
It's tricky to keep these apart, since when we use words like "justified" and "justification," we're sometimes talking about (i) and other times talking about (ii). For instance, if someone asks you if you can justify your belief, they're talking about (ii). If they ask whether you have a justification for your belief, they may also be talking about (ii). That is, they may be asking whether you have some justifying argument that you could present to defend your belief against criticism. Or, they may be talking about (i). They may just be asking whether you have good reasons for your belief--regardless of whether you can say what those reasons are or convince your critics.
In my view, the epistemological status of having justification is more important, and it does not depend on your being able to engage in the activity of defending or justifying your belief. It can be reasonable for you to believe something even if you're not able to prove that it's reasonable, or explain what makes it reasonable. As the epistemologist Robert Audi says:
It would seem that just as a little child can be of good character even if unable to defend its character against attack, one can have a justified belief even if, in response to someone who doubts this, one could not show that one does.
So in this class we'll mostly be concentrating on that epistemological status. But the activity of justifying your belief will also come up from time to time in our discussion. We haven't yet gotten far enough into epistemology for you to have a clear perspective on how these two things are connected. So just file this distinction away in the back of your mind. We'll come back to it later.
The next issue to think about is the connection between having justification and having a belief. You can have justification for believing things that you don't in fact believe. For instance, suppose I go up to my ex-girlfriend and I tell her I can't live without her, she should leave her husband and come back to me. She laughs in my face and tells me she doesn't care about me anymore, she's totally in love with her husband Tom. Now at this point I have very good reasons for believing that my girlfriend is not in love with me. Even so, you can imagine that I might still refuse to believe it. I might think, "She still loves me, she just wants me to be jealous." If I thought that, I would be unreasonable. But I could still think it. So just because you have justification for believing something, it doesn't follow that you do believe it.
For the time being, though, let's not worry about that. To keep things simple, let's pretend that you believe whatever you have good reason to believe. At least for now.
Now, even if you have justification for believing P, and you do believe that P, it doesn't follow that your belief is epistemically respectable.
For instance, continuing with the saga of me and my ex-girlfriend, suppose I go to my sister's house. She has a Magic 8 ball. I ask it "Does my girlfriend love me?" and I shake it, and the answer comes back "Definitely not." Now I didn't believe my girlfriend, but I do trust the Magic 8 ball. So now I believe that my girlfriend doesn't love me anymore.
Now we wouldn't say that my belief in this case is epistemically respectable. There seems to be something defective about it. Yet we said that I do have good reasons for believing that my girlfriend doesn't love me anymore. After all, she told me so, plus she's gone and gotten married to someone else. So I have good reasons for believing it, and I do believe it. So what more do you want?
What seems bad in my example is that, although I have good reasons for believing P, those aren't the reasons why I believe P. I also have some bad reasons for believing P, and in the example I believe P for the bad reasons, instead of for the good ones.
Philosophers sometimes describe this phenomenon by saying that I have justification for believing P, and I believe that P, but my belief is not a justified belief. In order to have a justified belief that P, it takes more than just believing P and having good reasons for believing P. You also have to believe P for those reasons.
I think it's confusing to use the word "justification" and "justified" in all these different ways. So I'll talk this way instead. I'll say that in my example, I have justification for believing P, and I believe that P, but my belief is not a well-founded belief, since it's based on bad reasons instead of good ones. (I take this term from Feldman.) When you have a belief that's not well-founded, we'll say it's ill-founded.
Philosophers disagree about which is more basic: the notion of having justification for believing P, or the notion of having a well-founded belief that P. I think the first notion is more basic. But at the moment, the important thing is just to recognize that there are these two different notions. We can leave the question of which is more basic for another day.
So, pulling this all together: When I went to my girlfriend and asked her to come back to me and she laughed, I had justification for believing that she didn't love me anymore, but I didn't yet believe it. After I shook the Magic 8 ball, then I did believe she didn't love me anymore, and I still had justification for believing this (because she laughed etc.). But my belief was ill-founded. It was based on bad reasons rather than on good reasons.
This distinction too is one that will come up from time to time in our discussion. For the time being, though, we want to keep matters simple. So we'll start off by pretending that whenever you have good reasons to believe P, you actually do believe P, and what's more you believe it for those good reasons.
When we talk about "reasons for believing P," there's an important ambiguity there.
(i) One kind of reason for believing something is a practical or instrumental reason. You have this kind of reason when there would be a good "pay-off" to having the belief. For instance, if Tom believes that his wife is being faithful to him, that has a good pay-off, because it makes him happy. Pascal argued that it's rational to believe in God because the expected pay-off of having that belief is so high. (I sketch the arithmetic behind Pascal's reasoning just below, after (ii).) William James discusses a case where a climber gets stuck in the Alps and has to jump across a crevasse. Suppose it's more likely that the climber will make it if he believes that he'll make it; and suppose the climber knows this. Then, James argues, what it's rational for the climber to do is to form the belief that he will succeed. Having this belief will make him better off.
All of these examples have to do with a subject's practical reasons for believing things.
(ii) The other kind of reason for believing something is what we'll call an epistemic reason. This consists of things like good evidence, evidence that makes your belief likely to be true. Pascal's and James' arguments do not show that we have good epistemic reasons to believe in God, or that the climber has good epistemic reasons to believe he can successfully jump the crevasse. Their arguments don't do anything to show that those beliefs are likely to be true. (Although the climber is more likely to jump the crevasse if he believes he can, that does not mean that he is very likely to do so. Perhaps believing that he will succeed only raises his chances from 5% to 10%.)
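To make Pascal's style of reasoning vivid, here's a schematic expected-pay-off calculation. (The decision-theoretic notation and the placeholder values $c_1, c_2, c_3$ are my illustration; Pascal's own presentation is less formal.) Let $p > 0$ be the probability that God exists, and let the $c_i$ be finite worldly costs and benefits:

$$ \mathrm{EU}(\text{believe}) = p \cdot \infty + (1-p)\, c_1 = \infty $$

$$ \mathrm{EU}(\text{don't believe}) = p\, c_2 + (1-p)\, c_3 < \infty $$

However small $p$ is, so long as it's positive, believing has the greater expected pay-off. Notice that nothing in this calculation bears on whether the belief is likely to be true. That's exactly the point of the contrast with epistemic reasons in (ii).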
The connections between practical reasons and epistemic reasons are philosophically very interesting, and epistemologists do investigate those connections. But most often, when epistemologists talk about "justification," they're talking about the epistemic notion, not the practical one. For instance, I said before that knowing that P requires you to have some justification for believing that P. What I meant was: it requires you to have some epistemic justification for believing P. Even if Pascal's belief in God and the mountain climber's belief that he can jump the crevasse are reasonable in one sense (they are better off, or practically justified in having those beliefs), that does nothing to show that their beliefs count as knowledge. Pascal's argument does not enable him to know that God exists. The mountain climber's reasoning does not enable him to know that he can successfully jump the crevasse. It seems like you can only know that P if your belief that P is supported by good evidence, that makes it likely to be true. (Of course, as I said, the connections between knowledge and justification are controversial. We will be discussing those controversies as the term progresses.)
So in this class, we'll be focusing on reasons and justification of the epistemic sort, rather than the practical sort.
Now, there are different things that we might call "your epistemic conduct." One sort of conduct has to do with your behavior as an inquirer. This is a matter of how you go about gathering evidence, testing hypotheses, scrutinizing your assumptions, and so on. Basically, it's a matter of how you go about changing your epistemic situation. The second sort of conduct has to do with what beliefs you should form, given that you're in a particular epistemic situation. We can call this a matter of your here-and-now doxastic choices. ("Doxastic" comes from the Greek word "doxa," for belief.) Epistemology might tell you that the right thing to believe, given a certain body of evidence, is that P. Or it might tell you that the right thing to believe is that not-P. In some cases, it might tell you that the right thing to do is to believe neither, but rather to suspend judgment, until you get more evidence.
The connection between the two kinds of epistemic conduct is complicated. For instance, suppose that you've been a really crummy investigator. You overlooked some pieces of obvious evidence, you forgot about others, you were sloppy in your experiments, and you took a lot of things for granted. If so, then there's a clear sense in which your current situation is an epistemically bad one. But, even if you're to be censured for having such poor evidence right now, still it is evidence, and there's the question of what you should believe, given that evidence.
Rich Feldman has a nice example to illustrate this difference. It goes like this. Teresa is supposed to be investigating mitochondrial RNA, but instead of going to the library to do research, she just goes to a cafe and hangs out. It turns out she's sitting next to the world's expert on mitochondrial RNA, and he's explaining the very phenomenon that Teresa is interested in. She listens in on his conversation and thereby acquires all the information she needs. In this example, Teresa has been irresponsible in how she gathered evidence, but the beliefs she forms on the evidence she has may be highly justified.
In general, how good you are as an inquirer will be one question, and how good you are at assessing your current evidence will be another. You might be excellent at determining what your evidence supports, even if you're a crummy investigator. (You might be like Sherlock Holmes' brother, Mycroft. He's supposed to be even smarter than Sherlock is; he's just too lazy to leave his rooms or his club to do any research. But if Sherlock brings him the evidence, then he's super-sharp at assessing and interpreting it.)
We expect epistemology to speak to both kinds of conduct. We expect it to tell us how to behave as inquirers, and also to tell us what kinds of here-and-now doxastic choices are the right ones to make, in any given epistemic situation.
Notice that when epistemology is telling us how to behave as inquirers, this is a practical matter: a matter of how we should act, if we want to get into a better epistemic situation. On the other hand, when epistemology is telling us what are the right doxastic choices to make, given that we're in a particular epistemic situation, this doesn't seem to be a practical matter. It seems to be more a matter of which beliefs are supported by the evidence that's available in that situation.
We said that sometimes you can be in a situation where your evidence supports P, but it would be possible for new evidence to come in and turn the tables, making not-P more likely to be true than P. When this is possible, we say that your evidence for believing P is defeasible. It's possible for new evidence to come in and "defeat" it.
Evidence can be defeated in two different ways. Suppose you read in the Science section of the NY Times that aluminum in the bloodstream causes Alzheimer's Disease. That's your original evidence. It justifies you in believing P: that aluminum causes Alzheimer's Disease. Now one way your evidence can be defeated is this. You meet the world's foremost researcher on Alzheimer's Disease, and she tells you that there's no causal link between aluminum and Alzheimer's. So the evidence you get by talking to her supports not-P. Now, she's a better authority than the Science section of the NY Times. So when you put all your evidence together, all things considered it comes out somewhat in favor of not-P. This is a case where your new evidence defeats your old evidence by outweighing it. (Sometimes philosophers say "overriding" instead of outweighing.)
Here's a different sort of case. Suppose that you didn't meet the Alzheimer's researcher. Instead you just go to a cocktail party. While you're at the party, you meet this reporter from the NY Times. He gets pretty drunk and confesses that he made up that story about Alzheimer's and aluminum. This too would defeat your original evidence for believing P. But in this case, you didn't acquire any evidence that there's no causal link between aluminum and Alzheimer's. There might be one, or there might not. Your new evidence just tells you that the NY Times story isn't reliable evidence about that question. This is a case where your new evidence defeats your old evidence by undercutting it, or showing it to be unreliable. (Sometimes philosophers say "undermining" instead of undercutting.)
In both cases you start out with evidence that justifies you in believing P, but then you get defeating evidence. And when you put all the evidence together, then all things considered, you're no longer justified in believing P. Of course, you might go on to gather more evidence in support of P. Or you might acquire evidence that defeats your defeating evidence. For instance, you might discover that the woman you thought was an Alzheimer's researcher was really a crazy actress. Or you might discover that the NY Times reporter doesn't really write stories for the Science section, he just likes to pretend he does. If you acquired that evidence, then it seems that, all things considered, you'd once again be justified in believing P.
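If it helps, here's a crude way to picture the two kinds of defeat with made-up numbers. (The probabilistic gloss is my illustration, not an official definition of defeat.) Say your confidence in P before reading the paper was $\Pr(P) = 0.5$, and the Times story raised it:

$$ \Pr(P \mid \text{story}) = 0.9 $$

Outweighing: the researcher's testimony is itself evidence for not-P, strong enough that, all things considered,

$$ \Pr(P \mid \text{story} + \text{testimony}) = 0.3 $$

Undercutting: the reporter's confession is no evidence for not-P at all; it just shows the story carries no information about P, so your confidence falls back to roughly where it started:

$$ \Pr(P \mid \text{story} + \text{confession}) \approx 0.5 $$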
A crucial question for us during this course will be whether evidence of this sort--defeasible evidence--can ever be good enough to enable you to know that P.
The Lottery Argument seems to show that it can't. It seems to show us that no matter how good your evidence is, so long as it leaves open any possibility of your being wrong, then you won't count as having knowledge. You may be justified in believing that your ticket will lose, but you can't know it.
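For concreteness (the numbers are mine): suppose you hold one ticket in a fair lottery with ten million tickets and exactly one winner. Then

$$ \Pr(\text{your ticket loses}) = 1 - \frac{1}{10^7} = 0.9999999 $$

Your evidence that your ticket will lose is about as strong as probabilistic evidence gets. Yet exactly one ticket-holder with the very same evidence is wrong, and intuitively none of you knows that your ticket will lose. The Lottery Argument generalizes the point: if a nonzero chance of error blocks knowledge here, it threatens to block knowledge wherever our evidence is defeasible.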
The fallibilist, on the other hand, is a philosopher who thinks that defeasible evidence can sometimes be good enough for knowledge. For example, it seems like our astronomical evidence is good enough to enable us to know that the moon is not made of blue cheese. But our evidence there is defeasible. You can imagine a sequence of discoveries that would give us evidence making it, all things considered, reasonable for us to believe that the moon is made of blue cheese, after all. I'm not saying we're going to get that evidence. It's extremely unlikely that that will happen. But it's still possible. So our evidence about whether the moon is made of blue cheese is defeasible. Yet it still seems good enough to enable us to know that the moon is not made of blue cheese.
A characteristic people often attribute to knowledge is this:
Certainty Principle. In order to know that P, you have to be absolutely certain that P.
Now this could mean several different things.
One thing you might mean by "being certain that P" is being especially confident about P, having no lingering doubts about P running through your mind. We can call this "psychological certainty." The fallibilist can allow that knowledge requires certainty in this sense. This doesn't say anything about whether your evidence is defeasible or not. People can be extremely confident about things, and yet turn out to be wrong.
Another thing you might mean by "being certain that P" is having really good evidence for P, evidence which is so good that there is no possibility of being wrong. This would be indefeasible evidence for P. It would be impossible for it to be defeated or overturned. If we interpret the Certainty Principle in this way, we can put it as follows:
Infallibility Principle. In order to know that P, you have to have evidence that guarantees that P is true, or makes you infallible about P.
To say that you're "infallible" about a topic means that you can't make mistakes about that topic. An example of evidence that's this strong might be a mathematical proof.
It is controversial whether knowledge requires certainty in this sense. Some philosophers think that it does. They reason, "Of course we can't know anything about the external world on the basis of perception. Our perceptual beliefs are all fallible and defeasible. They could be mistaken. But knowledge requires infallibility and absolute certainty. So our senses can't give us any knowledge."
The fallibilist, on the other hand, thinks that it's possible sometimes to know things even when our evidence isn't quite that good. It can be enough if our evidence is pretty damn good, but not so good as to make us infallible. For instance, you have pretty good evidence that there is a sun, and that it will be in the sky tomorrow. Your evidence isn't so good as to make you infallible; it'd be possible for you to be wrong. But your evidence is pretty damn good. The fallibilist will say that it's good enough to give you knowledge.
We will be looking at this debate closely as the course proceeds.
Sometimes people think that the debate about skepticism is just a debate about whether or not the Infallibility Principle is true. But it's not that simple. If you accept the Infallibility Principle, then it does look like skepticism will follow, at least about a great many topics. Maybe there are some things that you're infallible about (e.g., whether 1+1=2, or whether you're thinking about monkeys). But not many. So the Infallibility Principle does seem to support skepticism.
It's the reverse direction that's harder. Does the skeptic need the Infallibility Principle, for his argument to work? That's not so clear. Even if we show that the Infallibility Principle is false, we might not yet be in the clear. There may still be serious skeptical arguments for us to think about, that don't assume anything as strong as the Infallibility Principle. We'll be talking about this in coming weeks.
We've covered a lot of distinctions and terminology in the past few classes. Here's a list of the important ideas we've covered:
- factual (propositional) knowledge, as opposed to knowing a person or knowing how to do something
- knowing that P involves believing that P
- the contrast between what a speaker implies by saying something and what her words really mean
- the epistemic status of having justification vs. the activity of giving a justifying argument
- having justification for P without believing that P; well-founded vs. ill-founded belief
- practical (instrumental) reasons vs. epistemic reasons for believing
- your conduct as an inquirer vs. your here-and-now doxastic choices
- defeasible evidence; defeat by outweighing (overriding) vs. undercutting (undermining)
- psychological certainty, the Certainty Principle, and the Infallibility Principle; fallibilism