This page explores a special class of relations called consequence relations, along with some related notions.
Standardly, a consequence relation is understood to be a relation between a set of linguistic expressions called sentences or formulas, and a further single sentence or formula. Where P₁, P₂, … and Q are formulas, this is symbolized like this:

P₁, P₂, … ⊨ Q

That is pronounced “Q is a consequence of, or entailed by, premises P₁, P₂, …” — here the consequence relation is the one written as ⊨. Some inquiries deal with multiple consequence relations, and then they’ll be written with different symbols, perhaps by affixing subscripts to ⊨.
Sometimes authors qualify the notions of consequence or entailment described here with the adjective “logical.” That leaves room for them to also talk about other, non-logical consequence relations. The idea behind calling a consequence relation “logical” is that it holds just in virtue of the logical structure or form of the premises and conclusion, not their specific meanings. So it can be defined in terms of a “logical semantics,” which allows the specific meanings of various expressions to vary in arbitrary ways. For example, normally we’ll understand P ∨ R to be a consequence of P, no matter what P and R happen to mean, or what their truth values happen to be. Similarly, “Something is prowling” is a consequence of “A cat is prowling,” no matter what “prowling” and “cat” happen to mean. (A “logical semantics” contrasts with an “empirically correct semantics,” where we aim to identify the actual meanings of expressions in a language people use.)
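For readers who like to see such claims mechanized: here is a toy Python sketch of checking propositional consequence by brute-force truth tables. The encoding of formulas as Python functions is my own illustration, not standard notation.

```python
from itertools import product

# Represent formulas as functions from a valuation (atom name -> bool) to bool.
P = lambda v: v["P"]
R = lambda v: v["R"]
def Or(a, b):
    return lambda v: a(v) or b(v)

def entails(premises, conclusion, atoms):
    """True iff no valuation makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

print(entails([P], Or(P, R), ["P", "R"]))   # P ⊨ P ∨ R, whatever P and R mean
```

Because the check quantifies over every assignment of truth values, it captures the idea that the consequence holds however the specific meanings of P and R vary.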
When a formula Q is a (logical) consequence of zero premises, Q is called a (logical) theorem or a (logically) valid formula. For example, “Something is prowling or nothing is prowling” is a valid sentence, no matter what “prowling” happens to mean.
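In the same toy truth-table spirit (again, my own illustrative encoding), validity amounts to truth under every valuation:

```python
from itertools import product

def valid(formula, atoms):
    """A formula is valid iff it comes out true under every valuation."""
    return all(formula(dict(zip(atoms, values)))
               for values in product([True, False], repeat=len(atoms)))

P = lambda v: v["P"]
Not = lambda a: (lambda v: not a(v))
Or = lambda a, b: (lambda v: a(v) or b(v))

print(valid(Or(P, Not(P)), ["P"]))   # True: P ∨ ¬P is valid
print(valid(P, ["P"]))               # False: P alone is not
```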
When a formula Q is such that every formula is a consequence of it, Q is called a (logical) contradiction. (This notion can be defined in other ways, too, but generally the definitions boil down to the same thing.) For example, “Something is prowling and nothing is prowling” is a contradiction.
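The truth-table sketch also shows why everything follows from a contradiction: no valuation makes the premise true, so the consequence check holds vacuously. (Again, the encoding below is my own toy illustration.)

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """True iff no valuation makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

P = lambda v: v["P"]
Q = lambda v: v["Q"]
Not = lambda a: (lambda v: not a(v))
And = lambda a, b: (lambda v: a(v) and b(v))

contradiction = And(P, Not(P))
# Every formula follows: no row makes P ∧ ¬P true, so no row can be a counterexample.
print(entails([contradiction], Q, ["P", "Q"]))   # True
```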
The symbol ⊨ is called the “double turnstile.”
There’s another kind of relation between sets of formulas and a further single formula, written using a “single turnstile” ⊢. The claim:

P₁, P₂, … ⊢ Q

means that Q can be derived in some formal proof system from premises P₁, P₂, …. A proof system is a set of mechanical procedures for finding out whether certain relationships obtain. Often there are a variety of proof systems under consideration, so it has to be understood which one you mean when using ⊢.
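To give a feel for what a “mechanical procedure” can look like, here is a made-up toy proof system whose only rule is modus ponens. Real proof systems have more rules and axioms; this is only a sketch, and the tuple encoding of conditionals is my own invention.

```python
# Formulas are atoms (strings) or conditionals encoded as ("->", A, B).
# Derivability = closure of the premise set under modus ponens.
def derivable(premises, goal):
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for a in list(derived):
            for b in list(derived):
                # Modus ponens: from A and A -> C, conclude C.
                if isinstance(b, tuple) and b[0] == "->" and b[1] == a and b[2] not in derived:
                    derived.add(b[2])
                    changed = True
    return goal in derived

premises = {"P", ("->", "P", "Q"), ("->", "Q", "R")}
print(derivable(premises, "R"))   # True: get Q from P and P→Q, then R from Q and Q→R
```

The loop terminates because the derived set only ever grows by adding consequents already present inside the premise formulas.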
Technically speaking, consequence and derivability in a proof system are different notions, but often you’ll be dealing with proof systems where it has already been established that ⊨ and ⊢ coincide, that is, that a formula is a consequence of some premises (in the sense you’re interested in) iff it can be derived from those premises in the proof system in question. This isn’t always easy to establish (and sometimes it isn’t even possible), but often you’ll be in situations where you know that hard work has already been done. In those cases you can move freely between ⊨ and ⊢.
In fact, in some texts, authors will use ⊢ to express claims that are about consequence rather than derivability in a formal proof system, and so strictly speaking ought to have been expressed with ⊨ instead.
For both of these symbols, you’ll sometimes see them written with a list of formulas on the left-hand side, as we did above. Other times you’ll see them written with a term designating a set on the left-hand side, like this:
Δ ⊢ Q
That means that Δ is a set of formulas, and that when all of the formulas in Δ are allowed as premises, Q can be derived. You can also see them written with more than one set-designating term on the left-hand side, like this:
Δ, Γ ⊢ Q
That means that when all of the formulas in Δ or in Γ are allowed as premises, Q can be derived. So it could also be written, more verbosely, as:
Δ ∪ Γ ⊢ Q
Sometimes you’ll see them written with a mix of set-designating terms and single formulas, like this:
Δ, P, R ⊢ Q
That means the same as what could be written, more verbosely, as:
Δ ∪ {P, R} ⊢ Q
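These left-hand-side conventions are just abbreviations for set union, which a quick sketch makes concrete (the formula strings here are placeholders of my own):

```python
# "Δ, P, R ⊢ Q" abbreviates a premise set of Δ ∪ {P, R}.
delta = {"A", "A -> B"}           # Δ: some set of formulas, written as strings
premises = delta | {"P", "R"}     # the union Δ ∪ {P, R}
print(sorted(premises))
```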
Sometimes you’ll see these symbols written with nothing on the left-hand side, like this:
⊢ Q
That means that Q can be derived from zero premises. Similarly:

⊨ Q

means that Q is a consequence of zero premises. (That doesn’t mean that there are no premises that Q is a consequence of! As we’ll see, for many consequence relations, including the ones people call “logical,” if something is a consequence of zero premises, it’s also a consequence of arbitrary additional premises too.) These claims could be written, more verbosely, as:
∅ ⊢ Q
∅ ⊨ Q
Given the kind of “mechanical procedure” that a proof is understood to be, a derivation in a proof system always has to be finite, so the set of premises on the left-hand side of ⊢, however designated, also has to be finite. Consequence relations, on the other hand, can also hold between sets of infinitely many premises and a conclusion.
In our readings, you will sometimes see expressions like these, where Δ is a set-designating term and P is a formula:
Cn(Δ)
Cn(P)
These mean the set of formulas that are logical consequences of Δ or of {P}.
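In general Cn(Δ) is an infinite set, so it can’t be computed outright; but restricted to a fixed finite stock of candidate formulas, a toy version can be sketched (the candidate list and encoding are my own illustration):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """True iff no valuation makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

P = lambda v: v["P"]
Q = lambda v: v["Q"]
candidates = {
    "P": P,
    "Q": Q,
    "P or Q": lambda v: v["P"] or v["Q"],
    "P and Q": lambda v: v["P"] and v["Q"],
}

def Cn(premises, atoms=("P", "Q")):
    """Consequences of the premises, restricted to the finite candidate set above."""
    return {name for name, f in candidates.items() if entails(premises, f, atoms)}

print(sorted(Cn([P])))   # ['P', 'P or Q']
```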
Another thing you might see is the double turnstile being used in a different way, looking something like this:
ℳ ⊨ Q
or:
ℳ, w ⊨ Q
where ℳ is something called a “model” — this is a notion used in logical semantics. When additional expressions like w or so on appear alongside it, these are further parameters that combine with the model. What these kinds of claims mean is that the particular model called ℳ, together with any relevant parameters, is one that “makes Q true” or “satisfies Q.” There are also other ways to symbolize this. I would write such claims instead like this:
⟦Q⟧_{ℳ} = true

⟦Q⟧_{ℳ w} = true
The same authors who use ⊨ in this way might also, even in the same texts, use ⊨ in the different way explained above. They rely on context to disambiguate.
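For propositional logic, a “model” is just a valuation of the atoms, so this second use of ⊨ is easy to sketch (the encoding is again my own toy illustration):

```python
# A propositional "model" is simply an assignment of truth values to atoms.
M = {"P": True, "R": False}

P = lambda v: v["P"]
R = lambda v: v["R"]
Or = lambda a, b: (lambda v: a(v) or b(v))

def satisfies(model, formula):
    """ℳ ⊨ Q: the model makes Q true."""
    return formula(model)

print(satisfies(M, Or(P, R)))   # True: M makes P ∨ R true
print(satisfies(M, R))          # False: M makes R false
```

Note the difference from the first use of ⊨: here the left-hand side is a model, not a set of premise formulas.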
The turnstiles we’ve been talking about so far are used to make claims about a given object language: that some formulas or sentences in that language are consequences of, or derivable from, others. They are not themselves expressions in the language we’re talking about.
As you know, many languages also contain expressions that express “if…then…” relationships between sentences. For example, one sentence in our language might be:
A cat is prowling ⊃ Something is prowling
where ⊃ expresses what’s called the “material conditional,” which you studied in your introductory logic class. This conditional is symbolized in various ways, sometimes for example as →, but the ⊃ symbol (pronounced “horseshoe”) is only ever used to mean this, whereas → is sometimes used to mean other things instead. So I prefer to use ⊃ here.
Another expression inside a language that you may see — though this is much more specialized and uncommon — expresses what’s called a “strict conditional” and is associated with C.I. Lewis. This looks like this:
P ⥽ Q
That means the same as what could also be expressed like this:
□(P ⊃ Q)
for some interpretation of □ (as we’ll see in upcoming classes, there can be many).
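To illustrate the unpacking of ⥽ as □(P ⊃ Q), here is a toy possible-worlds sketch. The two-world model, its accessibility relation, and the valuations are invented purely for illustration:

```python
# Toy Kripke model: each world sees some set of worlds, and each world
# assigns truth values to the atoms.
access = {"w1": {"w1", "w2"}, "w2": {"w2"}}
val = {"w1": {"P": True, "Q": True}, "w2": {"P": False, "Q": True}}

P = lambda w: val[w]["P"]
Q = lambda w: val[w]["Q"]
def Implies(a, b):            # the material conditional ⊃
    return lambda w: (not a(w)) or b(w)
def Box(f):                   # □: true at w iff f holds at every world w can see
    return lambda w: all(f(u) for u in access[w])

strict = Box(Implies(P, Q))   # P ⥽ Q, unpacked as □(P ⊃ Q)
print(strict("w1"))   # True: P ⊃ Q holds at both worlds w1 can see
```

Different interpretations of □ correspond to different accessibility relations, which is one way of seeing why “there can be many.”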
Lewis introduced his notion of a strict conditional because he thought that ⊃ did a bad job of capturing the meaning of “if…then…” and other conditional constructions in ordinary language. That’s true, but his notion turned out not to do such a great job either.
It’s not obvious that there’s even just one kind of thing we mean by conditional constructions in ordinary language. Consider the following two English sentences:
If Oswald didn’t shoot Kennedy, then someone else did.

If Oswald hadn’t shot Kennedy, then someone else would have.
These express very different thoughts: the first means something like, Kennedy was shot by somebody, perhaps it was Oswald but if not then it was somebody else. The second means something like: Kennedy was doomed, even if Oswald had somehow been detained, someone else — who perhaps in fact did nothing — would have in that case done the shooting.
In philosophical discourse, it’s become standard to call the first kind of ordinary language conditional an “indicative” conditional, and the second one a “subjunctive” or “counterfactual” conditional. In fact, if you look into the linguistic details, these labels are unfortunate, but by now they’re very entrenched.
Oddly, some of this semester’s literature seems to use “counterfactual conditional” to mean any conditional inside the language that isn’t a material conditional. I noticed this because they said “counterfactual” where they seemed to be thinking of the first kind of Oswald conditional above, which most philosophers would call an “indicative” conditional. That doesn’t accord with mainstream usage within philosophy, but apparently in certain circles that’s how they roll. Or these philosophers may be assuming that all ordinary language conditionals can be reduced to just one. Anyway, when this comes up I’ll point it out.
The symbols > (the greater-than sign) and → are used in different texts to symbolize any of the four inside-a-language conditionals described above. If you see them, you’ll have to rely on context to figure out what’s meant. What a pain! For this reason, I try to avoid using them.
The symbols ⇒ and ≡ are sometimes used to express some kind of conditional (or, for the second, biconditional) inside a language. Other times they are used to express metalanguage relationships, as the turnstiles do.
For example, P ≡ Q is sometimes used to express that P and Q are logically equivalent (that is, that each is a logical consequence of the other).
I would express that claim instead like this:
P ⫤⊨ Q
and I would express an inside-a-language biconditional like this:
P ⊂⊃ Q
These notations are somewhat less common, but they have the advantage of being less ambiguous and harder to misunderstand.
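The equivalence claim P ⫤⊨ Q is just consequence in both directions, which the truth-table sketch from earlier can check (the particular formulas below are my own example):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """True iff no valuation makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def equivalent(a, b, atoms):
    """P ⫤⊨ Q: each formula is a consequence of the other."""
    return entails([a], b, atoms) and entails([b], a, atoms)

P = lambda v: v["P"]
Q = lambda v: v["Q"]
imp = lambda v: (not v["P"]) or v["Q"]            # P ⊃ Q
neg_conj = lambda v: not (v["P"] and not v["Q"])  # ¬(P ∧ ¬Q)

print(equivalent(imp, neg_conj, ["P", "Q"]))   # True
print(equivalent(P, Q, ["P", "Q"]))            # False
```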
As we said, there are many different relations between formulas that get called consequence relations. There are some characteristics they all have in common that qualify them to be referred to using that vocabulary (and the symbolisms described above).
The first is Reflexivity: for any formula Q, Q ⊨ Q.
The second is Cut: for all sets of formulas Δ and Γ, if Δ, Γ ⊨ Q, and also Γ ⊨ D for every formula D ∈ Δ, then Γ ⊨ Q.
The third is Monotonicity: when Δ ⊨ Q (Δ may be empty), then for any further set of formulas Γ, it also holds that Δ, Γ ⊨ Q.
Monotonicity is a property of any logical consequence relation, though in a few weeks we’ll encounter other things called “consequence relations” for which it no longer holds. Sometimes weaker variations of it hold instead.
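Particular instances of these properties can be checked with the earlier truth-table sketch. (This only verifies instances, not the general facts, and the formulas chosen are my own examples.)

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """True iff no valuation makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

P = lambda v: v["P"]
Q = lambda v: v["Q"]
PorQ = lambda v: v["P"] or v["Q"]
atoms = ["P", "Q"]

assert entails([P], P, atoms)         # a Reflexivity instance: P ⊨ P
assert entails([P], PorQ, atoms)      # P ⊨ P ∨ Q
assert entails([P, Q], PorQ, atoms)   # a Monotonicity instance: still holds with extra premise Q
print("all instances check out")
```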