Spring 2016, NYU Abu Dhabi


Does Knowledge Require Certainty?

In order to know P, how good does your justification for believing P have to be?

Unger's Argument

In "A Defense of Skepticism" (Philosophical Review 80, 1971), my colleague Peter Unger argues as follows:
  1. If you know that p, then you have to be absolutely certain that p.
  2. For most propositions p that you believe, you're not absolutely certain that p.
  3. So for most of the propositions p that you believe, you don't know that p.
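For those who like to see the logical skeleton: once premise 1 is granted, the step from "you're not certain that p" to "you don't know that p" is just modus tollens, applied to each proposition p in turn. Here is a minimal sketch in Lean (the names K, C, and unger are mine, not Unger's):

```lean
-- K p : you know that p;  C p : you are absolutely certain that p.
-- certainty_principle is premise 1: knowledge requires certainty.
-- Given any p you're not certain of, it follows that you don't know p.
theorem unger (K C : Prop → Prop)
    (certainty_principle : ∀ p, K p → C p)
    (p : Prop) (not_certain : ¬ C p) : ¬ K p :=
  fun knows => not_certain (certainty_principle p knows)
```

Applying this to each of the "most propositions" mentioned in premise 2 yields the skeptical conclusion in step 3.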

This first premise will come up many times in our discussion. Let's give it a name:

Certainty Principle. If you know that p, then you have to be absolutely certain that p.

"Certainty" can mean different things. To say that you're certain that p might mean that you're especially confident, that you have no lingering doubts about p running through your mind. Call this the psychological sense of "certainty." Alternatively, to say that you're certain that p might mean that you have really good evidence for p, evidence which is so good that there's no chance of your being wrong. It's not possible to believe that p on the basis of that kind of evidence and be mistaken. Call this the evidential sense of "certainty."

Unger intends to be using the psychological sense of "certainty" in his argument. He does this just to keep the argument simpler. He says that the evidential sense of "certainty" is a normative notion--it has to do with how good your evidence is, and so with how confident you should be that p, not with how confident you actually are. Talk about shoulds is controversial, and Unger wants to keep his discussion as straightforward as he can. He does, though, think both that we are certain of hardly anything and that we should be certain of hardly anything.

Why does Unger think that we are, and should be, certain of hardly anything?

The answer is that he thinks that "being certain" is an absolute term, like "empty" and "flat." He thinks that emptiness requires a thing to have nothing in it whatsoever--however small. And he thinks that, in order to be flat, a thing must have no bumps or curves whatsoever--however small. If "certain" were an absolute term, too, then being certain would require having no doubts whatsoever.

Unger argues that if "flat" is an absolute term, then:

Necessarily, if x is flatter (or nearer to being flat) than y, then that must mean that x has fewer bumps or curves than y, so y must have some bumps or curves; so, strictly speaking, y is not really flat.

Similarly, if "being certain" is an absolute term, then:

Necessarily, if you are (or should be) more certain of p than you are of q, then that must mean that you have (or should have) fewer doubts about p than about q, so you must have (or should have) some doubts about q; so, strictly speaking, you're not really certain of q.

Unger is of course willing to allow that y might be close enough to being flat for all practical purposes. Likewise, you might be close enough to certain of q for all practical purposes. But there's a big difference between what's strictly speaking true and what it's acceptable to say, or what's near enough to the truth for practical purposes. Here we're just concerned with what's strictly speaking true.

Unger thinks that for most propositions q, the proposition that you exist is, and should be, more certain for you than q. Hence, if he's right that "being certain" is an absolute term, then--since there is something which is more certain for you than q--it follows that, strictly speaking, you're not certain of q. And if knowledge requires absolute certainty, then you can't know that q.

Question:
Does it sound plausible to you to say that you're not certain of propositions like "There are automobiles"? Do you have any doubts about propositions like these? Do you have more doubts about propositions like these than you do about your own existence?

Do you think it's true that knowledge requires psychological certainty? What if you believe that p, but you have some doubts running through your mind--doubts you recognize to be irrational and baseless? Would that prevent you from knowing p? This is not clear to me.

Fallibilism

We say that you're fallible about a subject matter just in case you can make mistakes about that subject matter. If you can't make mistakes, then you're infallible.

A related notion is the notion of defeasibility: the evidence you have for believing that p is defeasible just in case it can be overturned or defeated as more evidence comes in. An example of indefeasible evidence might be a mathematical proof. Most other kinds of evidence are defeasible. For example, we have plenty of evidence that Mars is not made of coffee. But one can imagine a sequence of discoveries that would turn the tables, and make it reasonable to think that perhaps Mars is made of coffee, after all. I'm not saying we're going to get that evidence. It's extremely unlikely that that will happen. But it's still possible. So our evidence that Mars is not made of coffee is defeasible. It could be defeated or overturned by more evidence.

As we've seen, some people think that knowledge requires absolute certainty. These people will say that you can never know that p if your evidence for p is less than fully certain. If there's any chance that your evidence might later be defeated, then it won't be good enough to give you knowledge that p.

The Lottery Argument seems to confirm this claim that defeasible evidence can never be good enough for knowledge. It seems to show that no matter how good your evidence is, so long as it leaves open some possibility of your being wrong, you won't know. You may be very highly justified in believing that your ticket will lose, but you don't know it.

Other people think that it is sometimes possible to know things on the basis of defeasible evidence. These people are called fallibilists.

They think it can be enough if your evidence is pretty damn good, but not so good as to make you infallible. For instance, you have pretty good evidence that Mars is not made of coffee. You might be wrong. Your evidence is defeasible. But suppose you're not wrong. Your evidence is pretty damn good. The fallibilist will say that in this kind of situation you can count as knowing.

Different Kinds of Skeptical Argument

Sometimes people think that the debate about skepticism is just a debate about whether or not the Certainty Principle is true. But it's not that simple. If you accept the Certainty Principle, then it does look like skepticism will follow, at least about a great many topics. Maybe there are some things that you're infallible about (e.g., whether 1+1=2, or whether you're thinking about monkeys). But not many. So the Certainty Principle does seem to support skepticism.

It's the reverse direction that's trickier.

Let's distinguish three kinds of epistemically desirable state:

  1. The most demanding state is having an absolutely certain, indefeasible proof that p.
    Some philosophers think that the word "knowledge" applies only to this state.

  2. A less demanding state would be whatever it is that fallibilists think constitutes knowledge that p. Being in this state doesn't require you to be absolutely certain or to have indefeasible evidence that p is true. Perhaps it just requires P to be very likely to be true, on the basis of your evidence.

  3. An even less demanding epistemically desirable state is having justified or reasonable belief that p.
    You might have plenty of evidence that p when p is in fact false, so being in this state does not even require p to be true. (We will discuss the difference between 2 and 3 more next class.)

Suppose that the skeptic comes shaking his Certainty Principle at us and proclaiming that we don't have absolutely certain knowledge that P. So what? Won't we non-skeptics still have comfortable places to retreat to? Can't we say to the skeptic: "Okay, you win. We don't have knowledge, as you understand it. But we have some other, less demanding, but still epistemically desirable states. Who cares what you call them?"

And in particular, couldn't it still at least be reasonable for us to act on the assumption that P is true, even if we don't have what the skeptic calls "knowledge that P"? What would be really interesting--and really troubling--is if the skeptic had arguments that threatened our possession of such less-demanding states, too. Arguments that don't just fuss about our not having absolutely certain evidence.

The best, most interesting kinds of skeptical argument are of that sort. Some of them threaten to show that we can't even have justified beliefs about the world outside our heads. If they're right, then it's no more reasonable to believe that you're sitting down right now than it is to believe you're a brain in a vat.

This shows that we shouldn't think that the debate about skepticism is just a debate about whether the Certainty Principle is true. Even if we concede that we can't be certain of much about the outside world, there remain weaker--but still epistemically desirable--positions for us to aspire to. As we'll see during the course, some of them might arguably also deserve the name "knowledge." But as I said, who cares what we call them? On the other hand, even if we decided that certainty isn't a requirement for knowledge, we might not yet be in the clear. The most powerful skeptical arguments seem not to need anything as strong as the Certainty Principle. They purport to raise difficulties about our possessing even the weaker epistemic positions.