In order to know P, how good does your justification for believing P have to be?

Unger's Argument

Unger argues as follows:
  1. If you know that p, then you have to be absolutely certain that p.
  2. For most propositions p that you believe, you're not absolutely certain that p.
  3. So for most of the propositions p that you believe, you don't know that p.

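Using notation not in the original--write "K(p)" for "you know that p" and "C(p)" for "you're absolutely certain that p"--the argument's logical form can be sketched as a modus tollens applied proposition by proposition:

  1. For every p: K(p) implies C(p)              (the Certainty Principle)
  2. For most p that you believe: not-C(p)       (premise 2)
  3. So for most p that you believe: not-K(p)    (from 1 and 2, by modus tollens)
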
This first premise will come up many times in our discussion. Let's give it a name:

Certainty Principle. If you know that p, then you have to be absolutely certain that p.

"Certainty" can mean different things. To say that you're certain that p might mean that you're especially confident, that you have no lingering doubts about p running through your mind. Call this the psychological sense of "certainty." Alternatively, to say that you're certain that p might mean that you have really good evidence for p, evidence which is so good that there's no chance of your being wrong. It's not possible to believe that p on the basis of that kind of evidence and be mistaken. Call this the evidential sense of "certainty."

Unger intends to be using the psychological sense of "certainty" in his argument. He does this just to keep the argument simpler. He says that the evidential sense of "certainty" is a normative notion--it has to do with how good your evidence is, and so with how confident you should be that p, not with how confident you actually are. Talk about shoulds is controversial, and Unger wants to keep his discussion as straightforward as he can. He does, though, think both that we are certain of hardly anything, and that we should be certain of hardly anything.

Why does Unger think that we are, and should be, certain of hardly anything?

The answer is that he thinks that "being certain" is an absolute term, like "empty" and "flat." He thinks that emptiness requires a thing to have nothing in it whatsoever--however small. And he thinks that, in order to be flat, a thing must have no bumps or curves whatsoever--however small. If "certain" were an absolute term, too, then being certain would require having no doubts whatsoever.

Unger argues that if "flat" is an absolute term, then:

  Necessarily, if x is flatter (or nearer to being flat) than y, then that must mean that x has fewer bumps or curves than y, so y must have some bumps or curves; so strictly speaking, y is not really flat.

Similarly, if "being certain" is an absolute term, then:

  Necessarily, if you are (or should be) more certain of p than you are of q, then that must mean that you have (or should have) fewer doubts about p than about q, so you must have (or should have) some doubts about q; so strictly speaking, you're not really certain of q.

Unger is of course willing to allow that y might be close enough to being flat for all practical purposes. Likewise, you might be close enough to certain of q for all practical purposes. But there's a big difference between what's strictly speaking true and what it's acceptable to say, or what's near enough to the truth for practical purposes. Here we're just concerned with what's strictly speaking true.

Unger thinks that for most propositions q, the proposition that you exist is, and should be, more certain for you than q. Hence, if he's right that "being certain" is an absolute term, then--since there is something which is more certain for you than q--it follows that, strictly speaking, you're not certain of q. And if knowledge requires absolute certainty, then you can't know that q.

Question:
Does it sound plausible to you to say that you're not certain of propositions like "There are automobiles"? Do you have any doubts about propositions like these? Do you have more doubts about propositions like these than you do about your own existence?

Do you think it's true that knowledge requires certainty? What if you believe that P, but you have some doubts running through your mind--doubts you recognize to be irrational and baseless? Would that prevent you from knowing P?

Fallibilism

We say that you're fallible about a subject matter just in case you can make mistakes about that subject matter. If you can't make mistakes, then you're infallible.

A related notion is the notion of defeasibility: the evidence you have for believing that p is defeasible just in case it can be overturned or defeated as more evidence comes in. An example of indefeasible evidence would be a mathematical proof. Most other kinds of evidence are defeasible. For example, we have plenty of evidence that Mars is not made of coffee. But one can imagine a sequence of discoveries that would turn the tables, and make it reasonable to think that perhaps Mars is made of coffee, after all. I'm not saying we're going to get that evidence. It's extremely unlikely that that will happen. But it's still possible. So our evidence that Mars is not made of coffee is defeasible. It could be defeated or overturned by more evidence.

As we've seen, some people think that knowledge requires absolute certainty. These people will say that you can never know that p if your evidence for p is less than fully certain. If there's any chance that your evidence might later be defeated, then it won't be good enough to give you knowledge that p.

The Lottery Argument seems to confirm this claim that defeasible evidence can never be good enough for knowledge. It seems to show that no matter how good your evidence is, so long as it leaves open some possibility of your being wrong, you won't know. You may be very highly justified in believing that your ticket will lose, but you don't know it.

Other people think that it is sometimes possible to know things on the basis of defeasible evidence. These people are called fallibilists.

They think it can be enough if your evidence is pretty damn good, but not so good as to make you infallible. For instance, you have pretty good evidence that Mars is not made of coffee. You might be wrong. Your evidence is defeasible. But suppose you're not wrong. Your evidence is pretty damn good. The fallibilist will say that in this kind of situation you can count as knowing.

Let's get clear about one thing. Earlier we were discussing the claim that:

Knowledge is factive: that is, if you know that P then P has to be true.

That claim, by itself, is not enough to settle our current dispute about the Certainty Principle. The claim that knowledge is factive does not entail that:

Knowledge has to be based on indefeasible, absolutely certain evidence.

The fallibilist agrees that knowledge is factive. On his view, you can know P on the basis of fallible evidence, but only if P is also true. If there are other people who believe things on the basis of the same kinds of fallible evidence as you, but their beliefs are false, then their beliefs won't count as knowledge on anybody's view.

This point often confuses students, so make sure that you've thought it through and understood it.

The fallibilist says: to know P, you need to have good evidence for P, and in addition, P has to be true. (The evidence by itself usually won't guarantee that P is true.)

The Certainty Principle, on the other hand, says: to know P, your evidence has to be maximally good. It has to be so good that no one could have that evidence without P's being true.

Note:
Sometimes students think, "According to the fallibilist, if Mars really isn't made of coffee, then your fallible evidence will be good enough to give you knowledge that it isn't. But it won't be good enough for you to know that you know." I guess they're thinking that knowing that you know requires you to be absolutely certain that you know, and hence, absolutely certain that Mars isn't made of coffee. But it's not clear why that should be so. If we accept the fallibilist's view that you can know things about Mars on the basis of less-than-certain evidence, why shouldn't we also be able to know whether we have knowledge, on the basis of less-than-certain evidence?

We'll talk more about this later.

Different Kinds of Skeptical Argument

Sometimes people think that the debate about skepticism is just a debate about whether or not the Certainty Principle is true. But it's not that simple. If you accept the Certainty Principle, then it does look like skepticism will follow, at least about a great many topics. Maybe there are some things that you're infallible about (e.g., whether 1+1=2, or whether you're thinking about monkeys). But not many. So the Certainty Principle does seem to support skepticism.

It's the reverse direction that's trickier.

Let's distinguish three kinds of epistemically desirable state:

  1. The most demanding state is having an absolutely certain, indefeasible proof that p.
    Some philosophers think that the word "knowledge" applies only to this state.

  2. A less demanding state would be whatever it is that fallibilists think constitutes knowledge that p.
    This state would have to be a factive state, if it's to be a candidate to be knowledge. So being in this state requires that p in fact be true. But it doesn't require you to be absolutely certain or to have indefeasible evidence that p is true.

  3. The least demanding epistemically desirable state is having justified or reasonable belief that p.
    You might have plenty of evidence that p when p is in fact false, so being in this state does not require p to be true.

Now, suppose that the skeptic comes shaking his Certainty Principle at us, and proclaiming that we don't have absolutely certain knowledge that P. So what? Won't we non-skeptics still have comfortable places to retreat to? Can't we say to the skeptic: "Okay, you win. We don't have knowledge, as you understand it. But we can have these other, less-demanding, epistemically desirable states. Who cares what you call them?"

What would be really interesting--and really troubling--is if the skeptic had arguments that threatened our possession of these less-demanding states, too. Arguments that don't just fuss about our not having absolutely certain evidence.

The best, most interesting kinds of skeptical argument are of that sort. Some of them threaten to show that we can't even have justified beliefs about the world outside our heads. If they're right, then it's no more reasonable to believe that you're sitting down right now than it is to believe you're a brain in a vat.

This shows that we shouldn't think that the debate about skepticism is just a debate about whether the Certainty Principle is true. Even if we concede that we can't be certain of much about the outside world, there remain weaker--but still epistemically desirable--positions for us to aspire to. Some of them might deserve the name "knowledge." On the other hand, even if we decided that certainty isn't a requirement for knowledge, we might not yet be in the clear. The most powerful skeptical arguments don't assume anything as strong as the Certainty Principle. They purport to raise difficulties about our possessing even the weaker epistemic positions.



URL: http://www.princeton.edu/~jimpryor/courses/epist/notes/certainty.html
Last updated: 12:21 PM Mon, Feb 9, 2004
Created and copyrighted by: James Pryor