
21 Aug 2018

Priest (CBS) An Introduction to Non-Classical Logic, collected brief summaries


by Corry Shores


[Central Entry Directory]

[Logic and Semantics, entry directory]

[Graham Priest, entry directory]

[Priest, Introduction to Non-Classical Logic, entry directory]

 

[The following collects the brief summaries of this text. For the directory without summaries, go here.]




Collected Brief Summaries of

 

Graham Priest

 

An Introduction to Non-Classical Logic:

From If to Is




Part I

Propositional Logic

 

 

 

ch.0

Mathematical Prolegomenon

 

0.1

Set-theoretic Notation

 

The following are definitions for basic set-theoretic notions that will appear throughout the book. [The text below is quoted from pp.xxvii-xxix.]

 

set

A set, X, is a collection of objects. If the set comprises the objects a1, ... , an, this may be written as {a1, ... , an}. If it is the set of objects satisfying some condition, A(x), then it may be written as {x :A(x)}.

 

membership

a ∈ X means that a is a member of the set X, that is, a is one of the objects in X. a ∉ X means that a is not a member of X.

singleton

for any a, there is a set whose only member is a, written {a}. {a} is called a singleton (and is not to be confused with a itself).

 

empty set

There is also a set which has no members, the empty set; this is written as φ.

 

subset

A set, X, is a subset of a set, Y, if and only if every member of X is a member of Y. This is written as X ⊆ Y. The empty set is a subset of every set (including itself).

 

proper subset

X ⊂ Y means that X is a proper subset of Y; that is, everything in X is in Y, but there are some things in Y that are not in X. X and Y are identical sets, X = Y, if they have the same members, i.e., if X ⊆ Y and Y ⊆ X. Hence, if X and Y are not identical, X ≠ Y, either there are some members of X that are not in Y, or vice versa (or both).

 

union

The union of two sets, X, Y, is the set containing just those things that are in X or Y (or both). This is written as X ∪ Y. So a ∈ X ∪ Y if and only if a ∈ X or a ∈ Y.

 

intersection

The intersection of two sets, X, Y, is the set containing just those things that are in both X and Y. It is written X ∩ Y. So a ∈ X ∩ Y if and only if a ∈ X and a ∈ Y.

 

relative complement

The relative complement of one set, X, with respect to another, Y, is the set of all things in Y but not in X. It is written Y − X. Thus, a ∈ Y − X if and only if a ∈ Y but a ∉ X.

 

ordered pair

An ordered pair, ⟨a, b⟩, is a set whose members occur in the order shown, so that we know which is the first and which is the second. Similarly for an ordered triple, ⟨a, b, c⟩, quadruple, ⟨a, b, c, d⟩, and, in general, n-tuple, ⟨x1, . . . , xn⟩.

 

cartesian product

Given n sets X1, . . . , Xn, their cartesian product, X1×· · ·×Xn, is the set of all n-tuples, the first member of which is in X1, the second of which is in X2, etc. Thus, ⟨x1, . . . , xn⟩ ∈ X1×· · ·×Xn if and only if x1 ∈ X1 and . . . and xn ∈ Xn.

 

relation

A relation, R, between X1, . . . , Xn is any subset of X1×· · ·×Xn. ⟨x1, . . . , xn⟩ ∈ R is usually written as Rx1 . . . xn.

 

ternary and binary relations

If n is 3, the relation is a ternary relation. If n is 2, the relation is a binary relation, and Rx1 x2 is usually written as x1Rx2.

 

function

A function from X to Y is a binary relation, f, between X and Y, such that for all x ∈ X there is a unique y ∈ Y such that xfy. More usually, in this case, we write: f(x) = y.

(Priest xxvii-xxix)
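Since these set-theoretic notions recur throughout the book, it may help to see them in executable form. Below is a minimal Python sketch of my own (not from Priest); the particular sets and the helper check at the end are arbitrary illustrations.

```python
# Basic set-theoretic notions from 0.1, illustrated with Python's built-in sets.
from itertools import product

X = {1, 2, 3}
Y = {2, 3, 4, 5}

print(2 in X)          # membership: 2 ∈ X
print(6 not in X)      # non-membership: 6 ∉ X
print(X <= Y)          # subset: X ⊆ Y (False here)
print({2, 3} < X)      # proper subset: {2, 3} ⊂ X
print(X | Y)           # union: X ∪ Y
print(X & Y)           # intersection: X ∩ Y
print(Y - X)           # relative complement: Y − X

# Cartesian product X × Y as a set of ordered pairs ⟨x, y⟩.
XxY = set(product(X, Y))

# A (binary) relation R between X and Y is any subset of X × Y.
R = {(x, y) for (x, y) in XxY if x < y}

# A function f from X to Y is a relation pairing each x ∈ X with exactly one y ∈ Y.
f = {(x, x + 1) for x in X}
is_function = all(len({y for (a, y) in f if a == x}) == 1 for x in X)
print(is_function)     # True
```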

 

 

 

 

 

ch.1

Classical Logic and the Material Conditional

 

1.1

Introduction

 

The main purpose of logic is to provide an account of validity, which determines what follows from what. The account is given in a metalanguage for a formal object language. There are two types of validity: {1} Semantic validity (symbolized ⊨), which preserves truth: every interpretation that makes the premises true also makes the conclusion true. {2} Proof-theoretic validity (symbolized ⊢), which is determined by means of a procedure operating on a symbolization of the inference. Most contemporary logicians think that semantic validity is more fundamental than proof-theoretic validity, but it is good nonetheless to provide a proof-theoretic notion of validity to correspond with a semantic notion. A proof-theory is sound when “every proof-theoretically valid inference is semantically valid (so that ⊢ entails ⊨)” and it is complete when “every semantically valid inference is proof-theoretically valid (so that ⊨ entails ⊢)” (4).

 

 

1.2

The Syntax of the Object Language

 

Our object language has a formalized syntax with the following notations.

 

Propositional parameters (propositional variables):

p0, p1, p2, ....

 

Connectives:

¬ (negation), ∧ (conjunction), ∨ (disjunction), ⊃ (material conditional), ≡ (material equivalence)

 

Punctuation:

(, )

 

Arbitrary (not necessarily distinct) formulas:

A, B, C, ...

 

Arbitrary but distinct propositional parameters:

p, q, r, ...

 

Arbitrary sets of formulas:

Σ, Π, ...,

 

Empty set:

φ

Outer parentheses around complex formulas and curly brackets around finite sets are omitted. Well-formed formulas are either propositional parameters or complex formulas built up from propositional parameters by means of the connectives.

 

 

1.3

Semantic Validity

 

An interpretation of the object language is a function, ν, that assigns a truth value (1 or 0) to each propositional parameter, as for example: ν(p) = 1 and ν(q) = 0. For our classical logic semantics, the interpretation then determines the values of compound formulas via the connectives in the following way:

ν(¬A) = 1 if ν(A) = 0, and 0 otherwise.
ν(A ∧ B) = 1 if ν(A) = ν(B) = 1, and 0 otherwise.
ν(A ∨ B) = 1 if ν(A) = 1 or ν(B) = 1, and 0 otherwise.
ν(A ⊃ B) = 1 if ν(A) = 0 or ν(B) = 1, and 0 otherwise.
ν(A ≡ B) = 1 if ν(A) = ν(B), and 0 otherwise.

A conclusion, A, is a semantic consequence of a set of premises, Σ (that is, Σ ⊨ A), if and only if there is no interpretation that makes all the members of Σ true and A false; that is, if and only if every interpretation that makes all the members of Σ true makes A true as well. ‘Σ ⊭ A’ means that A is not a semantic consequence of Σ. A logical truth or tautology is a formula that is true under every interpretation, written for example as: ⊨ A. This also means it is a semantic consequence of the empty set of premises: φ ⊨ A.
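The definition of ⊨ can be checked mechanically for small cases by running through every interpretation. The following Python sketch is my own brute-force illustration of that idea, not anything from Priest; the tuple encoding of formulas and the function names are assumptions of the sketch.

```python
# Brute-force test of semantic consequence (Σ ⊨ A) for classical propositional logic.
from itertools import product

def value(formula, v):
    """Evaluate a formula (nested tuples over parameter names) under interpretation v."""
    if isinstance(formula, str):
        return v[formula]
    op = formula[0]
    if op == 'not':
        return 1 - value(formula[1], v)
    if op == 'and':
        return min(value(formula[1], v), value(formula[2], v))
    if op == 'or':
        return max(value(formula[1], v), value(formula[2], v))
    if op == 'imp':                       # A ⊃ B: false only when A is true and B false
        return max(1 - value(formula[1], v), value(formula[2], v))
    if op == 'iff':
        return 1 if value(formula[1], v) == value(formula[2], v) else 0

def params(formula, acc=None):
    acc = set() if acc is None else acc
    if isinstance(formula, str):
        acc.add(formula)
    else:
        for part in formula[1:]:
            params(part, acc)
    return acc

def entails(premises, conclusion):
    """Σ ⊨ A: every interpretation making all of Σ true makes A true."""
    letters = sorted(set().union(*(params(f) for f in premises + [conclusion])))
    for values in product([0, 1], repeat=len(letters)):
        v = dict(zip(letters, values))
        if all(value(p, v) == 1 for p in premises) and value(conclusion, v) == 0:
            return False
    return True

# p ⊃ q, p ⊨ q holds; p ∨ q ⊨ p does not.
print(entails([('imp', 'p', 'q'), 'p'], 'q'))   # True
print(entails([('or', 'p', 'q')], 'p'))         # False
```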

 

 

1.4

Tableaux

 

We construct structures called tableaux to test for certain properties of arguments and formulas, especially validity and proof-theoretic consequence. The tableau has a structure of branches from a root down to tips. The structure can be displayed in the following way:

 

          ∙
       ↙     ↘
     ∙         ∙
     ↓       ↙   ↘
     ∙     ∙       ∙

Nodes are the dots. The top node is the root, and those at the bottom are the tips. A branch is a path starting from the root and descending through a series of arrows as far as it can go.
 
To use the tableaux for validity (proof-theoretic consequence), we will need to place the premises (if there are any) on a single branch along with the negation of the conclusion. This beginning set-up is called the initial list. We then proceed to develop the branches using various transformational rules, namely:
 
Double Negation
Development (¬¬D)
¬¬A
A

Conjunction
Development (∧D)
A ∧ B
A
B

Negated Conjunction
Development (¬∧D)
¬(A ∧ B)
↙   ↘
¬A      ¬B

Disjunction
Development (∨D)
A ∨ B
↙   ↘
A      B

Negated Disjunction
Development (¬∨D)
¬(A ∨ B)
¬A
¬B

Conditional
Development (⊃D)
A ⊃ B
↙   ↘
¬A      B

Negated Conditional
Development (¬⊃D)
¬(A ⊃ B)
A
¬B

Biconditional
Development (≡D)
A ≡ B
↙   ↘
A      ¬A
B      ¬B

Negated Biconditional
Development (¬≡D)
¬(A ≡ B)
↙   ↘
A      ¬A
¬B      B
 
“A tableau is complete iff every rule that can be applied has been applied” (8). “A branch is closed iff there are formulas of the form A and ¬A on two of its nodes; otherwise it is open. A closed branch is indicated by writing an × at the bottom. A tableau itself is closed iff every branch is closed; otherwise it is open” (8). Furthermore: “A is a proof-theoretic consequence of the set of formulas Σ (Σ ⊢ A) iff there is a complete tree whose initial list comprises the members of Σ and the negation of A, and which is closed. We write ⊢ A to mean that φ ⊢ A, that is, where the initial list of the tableau comprises just ¬A. ‘Σ ⊬ A’ means that it is not the case that Σ ⊢ A” (8-9). Thus in this way we can use the tableau to test for proof-theoretic consequence (validity). If a branch closes, we do not need to develop it further. For practical convenience, we should try to make the tableau as simple as possible. One way to do this is to use non-branch-splitting rules before branch-splitting ones. And, after applying a rule to a formula, it helps to place a tick mark next to it in order to signal that we may forget it.
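For small inferences, the tableau procedure itself can be mechanized: apply the rules exhaustively and report whether every branch closes. The Python sketch below is my own illustration of that idea (not Priest's); the tuple encoding of formulas and the function names are assumptions of the sketch.

```python
# A toy tableau test of proof-theoretic consequence for classical propositional logic.
# Formulas: parameter names as strings, or tuples ('not', A), ('and', A, B),
# ('or', A, B), ('imp', A, B), ('iff', A, B). A branch is a set of formulas.

def is_literal(f):
    return isinstance(f, str) or (f[0] == 'not' and isinstance(f[1], str))

def closes(branch):
    """True iff every branch of the completed tableau grown from `branch` closes."""
    if any(('not', f) in branch for f in branch):
        return True                                   # A and ¬A on the branch: closed
    target = next((f for f in branch if not is_literal(f)), None)
    if target is None:
        return False                                  # complete and open
    rest = branch - {target}
    op = target[0]
    if op == 'not':
        g = target[1]
        if g[0] == 'not':                             # ¬¬D
            return closes(rest | {g[1]})
        if g[0] == 'and':                             # ¬∧D: branch on ¬A and ¬B
            return closes(rest | {('not', g[1])}) and closes(rest | {('not', g[2])})
        if g[0] == 'or':                              # ¬∨D
            return closes(rest | {('not', g[1]), ('not', g[2])})
        if g[0] == 'imp':                             # ¬⊃D
            return closes(rest | {g[1], ('not', g[2])})
        if g[0] == 'iff':                             # ¬≡D
            return (closes(rest | {g[1], ('not', g[2])}) and
                    closes(rest | {('not', g[1]), g[2]}))
    if op == 'and':                                   # ∧D
        return closes(rest | {target[1], target[2]})
    if op == 'or':                                    # ∨D
        return closes(rest | {target[1]}) and closes(rest | {target[2]})
    if op == 'imp':                                   # ⊃D
        return closes(rest | {('not', target[1])}) and closes(rest | {target[2]})
    if op == 'iff':                                   # ≡D
        return (closes(rest | {target[1], target[2]}) and
                closes(rest | {('not', target[1]), ('not', target[2])}))

def proves(premises, conclusion):
    """Σ ⊢ A iff the tableau for Σ together with ¬A closes."""
    return closes(frozenset(premises) | {('not', conclusion)})

# Modus ponens closes; p ∨ q ⊬ p leaves an open branch.
print(proves([('imp', 'p', 'q'), 'p'], 'q'))   # True
print(proves([('or', 'p', 'q')], 'p'))         # False
```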
 
 

1.6

Conditionals

 
(1.6.1) We now will examine conditionality in classical propositional logic. (1.6.2) A conditional contains two propositions. One is the consequent, which depends in some sense on the other proposition, called the antecedent. In English, conditionals are often formed using “if” or similar constructions. (1.6.3) When writing the antecedent or the consequent by itself, we often need to change the verb tense or mood of the sentence, especially when formulating inferences. (1.6.4) Not all English “if” constructions are conditionals. We can test them by seeing whether they can be expressed in the form ‘that A implies B’.
 
 

1.7

The Material Conditional

 
(1.7.1) The material conditional, symbolized as ⊃, is true when the antecedent is false or the consequent is true. It is thus logically equivalent to ¬A ∨ B. But also on that account, it generates the “paradoxes of material implication,” namely, B ⊨ A ⊃ B and ¬A ⊨ A ⊃ B. In other words, suppose we have some given formula that is true. Then we can make it the consequent of a conditional with any arbitrary antecedent. Or suppose we have a negated formula that is true; then we can make the formula’s unnegated form the antecedent of a conditional with any arbitrary consequent. (1.7.2) The truth conditions for the material conditional allow for technically true but intuitively false sentences: sentences that fulfill the conditions for the material conditional but seem false on account of the irrelevance of the antecedent to the consequent, as for example, “If New York is in New Zealand then 2 + 2 = 4.” This seems to contradict the intuitive sense we ascribe to the English conditional, which involves relevance. (1.7.3) The counter-intuitive example conditionals are odd because they break certain rules of communication, namely the rule to assert the strongest (most informative) claim one can.
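These two inferences can be confirmed by brute force over the truth table. The following few lines are my own illustration, assuming Python booleans for truth values.

```python
# A small self-contained check that the "paradoxes of material implication"
# are classically valid: B ⊨ A ⊃ B and ¬A ⊨ A ⊃ B.
from itertools import product

def imp(a, b):            # material conditional: false only when a is true and b is false
    return (not a) or b

assert all(imp(a, b) for a, b in product([True, False], repeat=2) if b)        # B ⊨ A ⊃ B
assert all(imp(a, b) for a, b in product([True, False], repeat=2) if not a)    # ¬A ⊨ A ⊃ B
print("both paradoxes check out")
```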
 
 

1.8

Subjunctive and Counterfactual Conditionals

 

(1.8.1) A strong objection to the semantics of the material conditional and its application to natural language conditionals comes from pairs of sentences with similar antecedents and consequents that, on account of subtle grammatical differences, have opposite truth values. Priest’s examples are: {1} If Oswald didn’t shoot Kennedy someone else did (which is true), and {2} If Oswald hadn’t shot Kennedy someone else would have (which is false). (1.8.2) One common way to deal with the apparent inconsistency in the above examples is to distinguish them in terms of grammatical properties and say that one type is not a material conditional. When a conditional sentence is indicative, it could be material, but when it is subjunctive or counterfactual, often using “would,” it is not material. (1.8.3) The English conditional is probably not ambiguous between subjunctive and indicative moods, on account of explicit syntactical differences that maintain a clear distinction. (1.8.4) Conditionals are subjunctive when they articulate a temporal perspective located before the stated event or fact, and they are indicative if they articulate a temporal perspective where that event or fact is established.

 

 

1.9

More Counter-Examples

 

(1.9.1) There are three other counter-examples to the material conditional, and they present damning objections to the claim that the English conditional is material. {1} (A ∧ B) ⊃ C ⊢ (A ⊃ C) ∨ (B ⊃ C); for example, “If you close switch x and switch y the light will go on. Hence, it is the case either that if you close switch x the light will go on, or that if you close switch y the light will go on.” {2} (A ⊃ B) ∧ (C ⊃ D) ⊢ (A ⊃ D) ∨ (C ⊃ B); for example, “If John is in Paris he is in France, and if John is in London he is in England. Hence, it is the case either that if John is in Paris he is in England, or that if he is in London he is in France.” And {3} ¬(A ⊃ B) ⊢ A; for example, “It is not the case that if there is a good god the prayers of evil people will be answered. Hence, there is a god” (14-15). (1.9.2) We cannot dismiss these counter-examples on grammatical grounds, because they are all in the indicative mood. And we cannot dismiss them on grounds of conversational implicature, because none of them breaks the rule to assert the strongest claim. (1.9.3) Nor can we reply that the above inferences really are valid provided we stipulate that the English conditional is material in those cases. For, by making that stipulation, we are admitting that the English conditional is not naturally material and is only artificially so, whereas the whole point of the defense was to show that the English conditional is naturally material.

 

 

1.10

Arguments for ⊃

 

(1.10.1) Even though the material conditional, ⊃, is not properly suited to describe the functioning of the English conditional, it came to be regarded as such because, until the 1960s, there was only the standard truth-table semantics, and the only plausible candidate in that semantics for “if” constructions was the material conditional. (1.10.2) However, there are notable arguments that the material conditional can be used to understand the English conditional, and they construe that relation in the following way: “‘If A then B’ is true iff ‘A ⊃ B’ is true.” (1.10.3) If ‘If A then B’ is true then ¬A ∨ B is true. (1.10.4) Suppose A and ¬A ∨ B are true. By disjunctive syllogism: A, ¬A ∨ B ⊨ B. This fulfills (*), when we take ¬A ∨ B as the C term. [Now, since A, ¬A ∨ B ⊨ B fulfills the definition of the English conditional, and since A, ¬A ∨ B ⊨ B also gives us the (modus ponens) logic of the conditional (given the equivalence of A ⊃ B and ¬A ∨ B), that means the logic of the English conditional is adequately expressed by A ⊃ B.] (1.10.6) What later proves important in the above argumentation is the use of disjunctive syllogism.

 

 

 

ch.2

Basic Modal Logic

 

2.1

Introduction

 

We will examine possible-world semantics and the most basic modal logic, K.

 

2.2

Necessity and Possibility

 

Modal logic deals with “the modes in which things may be true/false.” Such modes include possibility, necessity and impossibility. Modal semantics can employ the concept of possible worlds, which may be understood provisionally as a world situation that is a variation on our own, with it having slightly (or remarkably) different features. One world is possible relative to another if for example the one could actually become an outcome of the other.

 

 

2.3

Modal Semantics

 

In our modal semantics, we add to our propositional language two modal operators, □ for ‘necessarily the case that’ and ◊ for ‘possibly the case that’. An interpretation in our modal semantics takes the form ⟨W, R, v⟩, with W as the set of worlds, R as the accessibility relation, and v as the valuation function. ‘uRv’ can be understood as “world v is accessible from u,” “in relation to u, situation v is possible,” or “world u accesses world v.” Negation, conjunction, and disjunction are evaluated (assigned 0 or 1) just as in classical propositional logic, except that here we must specify in which world the valuation holds.

νw(¬A) = 1 if νw(A) = 0, and 0 otherwise.

νw(A ∧ B) = 1 if νw(A) = νw(B) = 1, and 0 otherwise.

νw(A ∨ B) = 1 if νw(A) = 1 or νw(B) = 1, and 0 otherwise.

(21)

A formula is possibly true in a world if it is true in some world that is possible in relation to it. A formula is necessarily true in a world if it is true in all worlds that are possible in relation to it.

For any world w ∈ W:

νw(◊A) = 1 if, for some w′ ∈ W such that wRw′, νw′(A) = 1; and 0 otherwise.

νw(□A) = 1 if, for all w′ ∈ W such that wRw′, νw′(A) = 1; and 0 otherwise.

(22)

Given these definitions, we can conclude that if a world has no accessible worlds, then any ◊ formulation will be false in that world (for, it cannot be true in any accessible world, as there are none), and any □ formulation will be true (for, it is vacuously the case that it is true in every accessible world, as there are no accessible worlds). We can diagram interpretations. Consider this example of an interpretation:

W = {w1, w2, w3}

w1Rw2, w1Rw3, w3Rw3

vw1(p) = 0, vw1(q) = 0;

vw2(p) = 1, vw2(q) = 1;

vw3(p) = 1, vw3(q) = 0.

This is depicted as:

[Diagram: w1, where ¬p and ¬q hold, has arrows to w2, where p and q hold, and to w3, where p and ¬q hold; w3 also has a looping arrow back to itself.]

Each world (w1, w2, w3) is given its own place on the diagram. Arrows from one world to another indicate the accessibility of the first to the second. The rounded arrow on w3 thus indicates the accessibility of a world to itself. And all the true propositions in a world are listed in that world’s place on the diagram (so if a formula is evaluated as 0, its negation is listed). Then, on the basis of our rules, we can infer further formulas for each world:

[Diagram: the same three worlds, now annotated with the further formulas that the evaluation rules yield at each world.]

¬◊A at any world is equivalent to □¬A. And ¬□A at any world is equivalent to ◊¬A. An inference is valid (as a semantic consequence) if it is truth-preserving in all worlds of all interpretations (that is, if in all worlds of all interpretations, whenever the premises are true, so too is the conclusion). A logical truth (or tautology) is a formula that is true in all worlds of all interpretations.

Σ ⊨ A iff for all interpretations ⟨W, R, v⟩ and all w ∈ W: if νw(B) = 1 for all B ∈ Σ, then νw(A) = 1.

⊨ A iff φ ⊨ A, i.e., for all interpretations ⟨W, R, v⟩ and all w ∈ W, νw(A) = 1.

(23)
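The clauses for ◊ and □ can be run directly on the small example interpretation above. The Python sketch below is my own illustration (not Priest's); the tuple encoding of formulas is an assumption of the sketch.

```python
# Evaluating modal formulas at worlds, using the three-world example interpretation.

W = {'w1', 'w2', 'w3'}
R = {('w1', 'w2'), ('w1', 'w3'), ('w3', 'w3')}
v = {('w1', 'p'): 0, ('w1', 'q'): 0,
     ('w2', 'p'): 1, ('w2', 'q'): 1,
     ('w3', 'p'): 1, ('w3', 'q'): 0}

def holds(formula, w):
    """Evaluate a formula (nested tuples) at world w."""
    if isinstance(formula, str):                      # propositional parameter
        return v[(w, formula)] == 1
    op = formula[0]
    if op == 'not':
        return not holds(formula[1], w)
    if op == 'and':
        return holds(formula[1], w) and holds(formula[2], w)
    if op == 'or':
        return holds(formula[1], w) or holds(formula[2], w)
    if op == 'poss':                                  # ◊A: A true at some accessible world
        return any(holds(formula[1], u) for u in W if (w, u) in R)
    if op == 'nec':                                   # □A: A true at every accessible world
        return all(holds(formula[1], u) for u in W if (w, u) in R)

print(holds(('poss', 'p'), 'w1'))   # True: p holds at w2, and w1Rw2
print(holds(('nec', 'q'), 'w2'))    # True (vacuously): w2 accesses no world at all
```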

 

 

2.4

Modal Tableaux

 

(2.4.1) Tableaux in modal logic take the same branching node structure as those for propositional logic. However, the nodes themselves have a different structure, and there are two possible ones. {1} A, i, where A is a formula and i is a natural number indicating the world in which the formula holds, or {2} irj, where i is a natural number for a world that accesses world j, also given as a natural number (the r stays as r). (2.4.2) We test for validity by setting the premises to true in world 0 and the negation of the conclusion to true in world 0. (2.4.3) The tableaux rules for modal logic are the same as for non-modal propositional logic, except we indicate the worlds involved, and the branches inherit the world indicators from above (they are listed below with the new ones). (2.4.4) There are four new tableaux rules for modal operators. (Here we list all rules together):

 

Double Negation
Development (¬¬D)
¬¬A,i
A,i

Conjunction
Development (∧D)
A ∧ B,i
A,i
B,i

Negated Conjunction
Development (¬∧D)
¬(A ∧ B),i
↙   ↘
¬A,i      ¬B,i

Disjunction
Development (∨D)
A ∨ B,i
↙   ↘
A,i      B,i

Negated Disjunction
Development (¬∨D)
¬(A ∨ B),i
¬A,i
¬B,i

Conditional
Development (⊃D)
A ⊃ B,i
↙   ↘
¬A,i      B,i

Negated Conditional
Development (¬⊃D)
¬(A ⊃ B),i
A,i
¬B,i

Negated Necessity
Development (¬□D)
¬□A,i
◊¬A,i

Negated Possibility
Development (¬◊D)
¬◊A,i
□¬A,i

Relative Necessity
Development (□rD)
□A,i
irj
A,j
(both □A,i and irj must occur somewhere on the same branch, but in any order or location)

Relative Possibility
Development (◊rD)
◊A,i
irj
A,j
(j must be new: it cannot occur anywhere above on the branch)

(24)

(2.4.5) Branches close when there are contradictions in the same world. (2.4.6) Priest provides examples to show how the tableaux are made. (2.4.7) We make counter-models using completed open branches. We assign worlds in accordance with the i numbers, and we assign R relations in accordance with the irj formulations. For nodes of the form p,i we assign vwi(p) = 1, and for nodes of the form ¬p,i we assign vwi(p) = 0. If neither occurs for some parameter, vwi(p) can be given any value we want. (2.4.8) Priest shows how to make a counter-model with an example. (2.4.9) These tableaux are both sound and complete.
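The counter-model recipe of 2.4.7 can also be stated as a short procedure. The following Python sketch is my own illustration; the encoding of branch nodes and the example open branch are hypothetical.

```python
# Reading a counter-model off a completed open branch, following the recipe in 2.4.7.
# Branch nodes are encoded as ('fml', formula, i) or ('rel', i, j).

branch = [('fml', ('not', 'p'), 0), ('rel', 0, 1), ('fml', 'p', 1)]   # hypothetical open branch

W = {n[2] for n in branch if n[0] == 'fml'} | \
    {k for n in branch if n[0] == 'rel' for k in (n[1], n[2])}
R = {(n[1], n[2]) for n in branch if n[0] == 'rel'}

v = {}
for n in branch:
    if n[0] == 'fml' and isinstance(n[1], str):                 # p,i  gives  v_wi(p) = 1
        v[(n[2], n[1])] = 1
    elif n[0] == 'fml' and n[1][0] == 'not' and isinstance(n[1][1], str):
        v[(n[2], n[1][1])] = 0                                  # ¬p,i  gives  v_wi(p) = 0
# Any parameter mentioned in neither way may be assigned whatever value we like.

print(W, R, v)   # worlds {0, 1}; relation {(0, 1)}; p false at w0, true at w1
```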

 

 

2.5

Possible Worlds: Representation

 

Possible world semantics is mathematical machinery. But it represents certain real features of truth and validity. We wonder, what exactly do possible worlds and their semantics represent, philosophically speaking?

 

 

2.6

Modal Realism

 

(2.6.1) Modal realism is the view that possible worlds are real worlds that exist at different times and/or places. (2.6.2) The fact that modal realism is mind-boggling should not be a problem, because we allow modern physics to boggle our minds. (2.6.3) An objection to modal realism is that other possible worlds that are real would have to be physically related to our real world and thus be extensions of our world or co-partitions of one ultimate real world. The reply is that since the other possible worlds are not spatially, temporally, or causally connected to ours, they have no physical connection to it and thus cannot be extensions of, or co-partitions with, our world. (2.6.4) Still, the objector can give the example of black holes, where it is conceivable that there is a part of this world that is not spatially, temporally, or causally related to the rest of our world. (2.6.5) It can also be objected that we should not define possibility itself in terms of alternate reality or physically disconnected actuality, because intuitively we do not think that some actuality in our present world demonstrates its possibility in other times or places of our world.

 

 

2.7

Modal Actualism

 

(2.7.1) Under modal actualism, possible worlds are understood not as physically real entities, like in modal realism, but rather as abstract entities, like numbers. (2.7.2) One version of modal actualism understands a possible world as a set of propositions or other language-like entities and as being “individuated by the set of things true at it, which is just the set of propositions it contains” (29). (2.7.3) One problem with the propositional understanding of possible worlds is that there are many sorts of sets of propositions, but not all constitute worlds. For example, “a set that contains two propositions but not their conjunction could not be a possible world” (29). (2.7.4) A big problem with the propositional understanding of possible worlds is that in order for propositions to form a world, we need to know which inferences follow validly from others. Then, after knowing that, we can apply the mathematical machinery to explain which inferences are valid. But as you can see, the mathematical machinery, which is what we are trying to substantiate with this propositional account, is made useless, as it is what is supposed to determine validity, not take validity ready-made and redundantly confirm it. (2.7.5) To avoid this problem of validity, there is another sort of modal actualism called combinatorialism. Here a possible world is understood as “the set of things in this world, rearranged in a different way. So in this world, my house is in Australia, and not China; but rearrange things, and it could be in China, and not Australia” (30). (2.7.6) Because arrangements are abstract objects, combinatorialism is a sort of modal actualism. And because combinations can be explained without the notion of validity, combinatorialism avoids the problems of validity that the propositional understanding suffered from. (2.7.7) One big problem with combinatorialism is that it is unable to generate all possible worlds. For, there could be objects in other possible worlds not found in our world or in any other possible world obtained by rearranging the objects in our world.

 

 

2.8

Meinongianism

 

(2.8.1) In modal realism, possible worlds and their members are concrete objects, and in modal actualism, they are abstract objects. In both cases, they are existing objects. Now we will consider the idea that they are non-existent objects (a position called Meinongianism). (2.8.2) We are already familiar with such non-existent things as Santa Claus and phlogiston. We can think of possible worlds in the same light. (2.8.3) Meinong famously held that there are non-existent objects, and the arguments against his position are not especially cogent. (2.8.4) An example of an uncogent argument against Meinongianism is that non-existent possible worlds cannot causally interact with us, and thus we can know nothing about them. Yet this objection would hold for modal actualism and modal realism too. (2.8.5) That same objection also fails to take into account the fact that we do know facts about certain non-existent objects on account of these facts being stipulated, for example: “Holmes lived in Baker Street – and not Oxford Street – because Conan Doyle decided it was so” (31). (2.8.6) Priest ends by noting that {1} the aforementioned ideas do not settle the matter, as there are more suggestions to consider, and {2} there are more objections to consider.

 

 

 

ch.3

Normal Modal Logics

 

3.1

Introduction

 

In this chapter, we examine some extensions of the modal logic K. We also address the question of which modal logics are most suitable for certain sorts of necessity, and we end by examining tense logics with more than one pair of modal operators.

 

 

3.2

Semantics for Normal Modal Logics

 

We distinguish the types of modal logic by subscripting their name to the turnstile, as for example: ⊨K. There are different classes of modal logics. Normal logics are the most important class, and K is the most basic of them. The different modal logics are defined according to certain constraints on the accessibility relation, R, including:

ρ (rho), reflexivity: for all w, wRw.

σ (sigma), symmetry: for all w1, w2, if w1Rw2, then w2Rw1.

τ (tau), transitivity: for all w1, w2, w3, if w1Rw2 and w2Rw3, then w1Rw3.

η (eta), extendability: for all w1, there is a w2 such that w1Rw2.

An interpretation in which R satisfies condition ρ (or σ, etc.) is a ρ-interpretation (or a σ-interpretation, etc.). A logic defined in terms of truth preservation over all worlds of all ρ-interpretations is called Kρ (or Kσ, etc.). The consequence relation of such a logic is written ⊨Kρ (or ⊨Kσ, etc.). So we would say, for example, that Σ ⊨Kρ A if and only if for all ρ-interpretations ⟨W, R, v⟩, and all w ∈ W, if vw(B) = 1 for all B ∈ Σ, then vw(A) = 1. We can combine the R conditions to get additional sorts of interpretations, like a ρσ-interpretation for example. Then, the logic Kστ is the consequence relation defined over all στ-interpretations. There are conventional names for various such logics, like S5 for Kρστ, S4 for Kρτ, and B for Kρσ. In nearly all cases, the conditions on R are independent, and they can be mixed and matched at will. Every normal modal logic, L, is an extension of K, in the sense that if Σ ⊨K A then Σ ⊨L A. A restricted K modal logic will have fewer interpretations than K, on account of many of the K interpretations not meeting the restriction’s criterion. However, these restrictions also happen to allow the restricted K logics to make more inferences valid. Thus there is an inverse relation between inferences and interpretations with respect to the effects of the restrictions. For this reason Kρσ is an extension of Kρ; Kρστ is an extension of Kρσ, and so on.
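For a finite frame, the four constraints can be checked mechanically. The Python sketch below is my own illustration; the example frame is arbitrary.

```python
# Checking the ρ, σ, τ, η constraints on an accessibility relation over a finite W.

W = {'w1', 'w2', 'w3'}
R = {('w1', 'w1'), ('w1', 'w2'), ('w2', 'w3'), ('w1', 'w3'), ('w3', 'w3')}

def reflexive(W, R):          # ρ: for all w, wRw
    return all((w, w) in R for w in W)

def symmetric(W, R):          # σ: if w1Rw2 then w2Rw1
    return all((v, u) in R for (u, v) in R)

def transitive(W, R):         # τ: if w1Rw2 and w2Rw3 then w1Rw3
    return all((u, z) in R for (u, v) in R for (x, z) in R if v == x)

def extendable(W, R):         # η: every w1 accesses some w2
    return all(any((w, u) in R for u in W) for w in W)

print(reflexive(W, R), symmetric(W, R), transitive(W, R), extendable(W, R))
# False False True True
```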

 

 

3.3

Tableaux for Normal Modal Logics

 

(3.3.1) To make tableaux for other normal modal logics, we will add rules regarding the R accessibility relation. (3.3.2) The tableaux for the different normal modal logics take rules reflecting the properties of the accessibility relations that characterize them.

Tableaux Rules for Kρ, Kσ, and Kτ

ρ, Reflexivity (ρrD)
.
iri

σ, Symmetry (σrD)
irj
jri

τ, Transitivity (τrD)
irj
jrk
irk

(3.3.3) In the first tableau example for normal modal logics, we learn that □p ⊃ p is valid in Kρ but not in K; thus Kρ is a proper extension of K. (3.3.4) In Priest’s second example, we learn that p ⊃ □◊p is not valid in K but is valid in Kσ; thus Kσ is a proper extension of K. (3.3.5) In the third of Priest’s examples, we learn that □p ⊃ □□p is not valid in K but is valid in Kτ; thus Kτ is a proper extension of K. (3.3.6) For compound systems, we must apply the rules for each restriction. When making the tableau, we should apply the ◊-rule first. Then, secondly, we compute and add all the needed new facts about r that then arise. Lastly, we should backtrack whenever necessary to apply the □-rule in cases of r where it is required. (3.3.7) We make counter-models by assigning worlds in accordance with the i numbers on an open branch, r relations in accordance with the irj formulations, p,i formulations as vwi(p) = 1, ¬p,i formulations as vwi(p) = 0, and if neither of those two cases shows up for some p, we can assign it any value we want. (3.3.8) These tableaux are both sound and complete.

 

 

3.5

S5

 

(3.5.1) The normal modal logic S5 has the universal or υ (upsilon) constraint, meaning that every world relates to every other world: for all w1 and w2, w1Rw2. (3.5.2) Given that under an υ-interpretation, all worlds access all others, we need not be concerned with the parts of our semantic evaluation rules that mention the R relation. As such, we evaluate necessity and possibility operators in the following way:

vw(□A) = 1 iff for all w′ ∈ W, vw′(A) = 1

vw(◊A) = 1 iff for some w′ ∈ W, vw′(A) = 1

(3.5.3) We make our tableaux for S5 using the tableau rules for modal logic, but eliminating the r designations; and: “Applying the ◊-rule to ◊A,i gives a new line of the form A,j (new j); and in applying the □-rule to □A,i, we add A,j for every j” (45).

S5 Necessity
Development (□D)
□A,i
A,j
(for every j)

S5 Possibility
Development (◊D)
◊A,i
A,j
(j must be new: it cannot occur anywhere above on the branch)

(modified from p.24, section 2.4.4)

 

(3.5.4) Kρστ and Kυ are equivalent logical systems, because whatever is semantically valid in the one is semantically valid in the other. (3.5.5) S5 stands for both Kυ and Kρστ, on account of their logical equivalence. (3.5.6) S numbering indicates the system’s relative strength.

 

 

3.6a

The Tense Logic Kt

 

(3.6a.1) We will now examine tense logic. (3.6a.2) The semantics of tense logic are the same as modal logic, only with some modifications to reflect certain temporal senses. The notion of succession is modeled with the accessibility relation such that w1Rw2  has the intuitive sense: ‘w1 is earlier than w2’. “□A means something like ‘at all later times, A’, and ◊A as ‘at some later time, A’,” but “we will now write □ and ◊ as [F] and ⟨F⟩, respectively. (The F is for ‘future’)” (49). (3.6a.3) The tense logic operators for the past are [P] and ⟨P⟩, which correspond semantically to □ and ◊. (3.6a.4) We evaluate the tense operators in the following way:

vw([P]A) = 1 iff for all w′ such that w′Rw, vw′(A) = 1

vw(⟨P⟩A) = 1 iff for some w′ such that w′Rw, vw′(A) = 1

vw([F]A) = 1 iff for all w′ such that wRw′, vw′(A) = 1

vw(⟨F⟩A) = 1 iff for some w′ such that wRw′, vw′(A) = 1

(50, with the future operator formulations being my guesses.)

(3.6a.5) “If, in an interpretation, R may be any relation, we have the tense-logic analogue of the modal logic, K, usually written as Kt” (50). (3.6a.6) The tableaux rules for the tense operators are much like those for necessity and possibility, only we need to keep in mind the order of the r formulations for the different tenses. Priest provides the following tableau rules for the tense operators.

 

Full Future

Development ([F]D)

[F]A,i

irj

A,j

(For all j)

 

Partial Future

Development (⟨F⟩D)

⟨F⟩A,i

irj

A,j

 

(j must be new: it cannot occur anywhere above on the branch)

 

Negated Full Future

Development (¬[F]D)

¬[F]A,i

⟨F⟩¬A,i

 

Negated Partial Future

Development (¬⟨F⟩D)

¬⟨F⟩A,i

[F]¬A,i

 

Full Past

Development ([P]D)

[P]A,i

jri

A,j

 

(For all j)

 

Partial Past

Development (⟨P⟩D)

⟨P⟩A,i

jri

A,j

 

(j must be new: it cannot occur anywhere above on the branch)

 

Negated Full Past

Development (¬[P]D)

¬[P]A,i

⟨P⟩¬A,i

 

Negated Partial Past Development (¬⟨P⟩D)

¬⟨P⟩A,i

[P]¬A,i

(50, with my added names and other data at the bottoms)

 

(3.6a.7) Priest then gives a tableau example. (3.6a.8) Priest then shows how to construct a counter-model in tense logic, using an example. (We use the same procedure given in section 2.4.7.) (3.6a.9) We can think of time going in reverse, from the future, moving backward through the past, by taking the converse R relation (yRx becomes xŘy) (and/or by converting all F’s to P’s and vice versa ).

 

 

3.6b

Extensions of Kt

 

(3.6b.1) We can apply constraints on the accessibility relation to obtain extensions of our modal tense logic Kt. (3.6b.2) These constraints on R condition the way ‘x is before y’ behaves. For example, the transitivity constraint makes beforeness transitive, and we can represent the beginninglessness or endlessness of time using the extendability constraint. (3.6b.3) Priest next notes some natural constraints for tense logic. {1} denseness (δ): if xRy then for some z, xRz and zRy, which places a moment between any two others; {2} forward convergence (ϕ): if xRy and xRz then (yRz or y = z or zRy); that is to say, when two moments come after some given moment, they cannot belong to two distinct futures but must instead fall along the same timeline; and {3} backward convergence (β): if yRx and zRx then (yRz or y = z or zRy); in other words, two moments coming before some given moment must fall along a single succession. (3.6b.4) Priest next gives the tableau rules for constrained tense logics. (For convenience, I have here added ones from later sections to keep all the rules in one place.)

 

Double Negation
Development (¬¬D)
¬¬A,i
A,i

Conjunction
Development (∧D)
A ∧ B,i
A,i
B,i

Negated Conjunction
Development (¬∧D)
¬(A ∧ B),i
↙   ↘
¬A,i      ¬B,i

Disjunction
Development (∨D)
A ∨ B,i
↙   ↘
A,i      B,i

Negated Disjunction
Development (¬∨D)
¬(A ∨ B),i
¬A,i
¬B,i

Conditional
Development (⊃D)
A ⊃ B,i
↙   ↘
¬A,i      B,i

Negated Conditional
Development (¬⊃D)
¬(A ⊃ B),i
A,i
¬B,i

(p.24, sections 2.4.3 and 2.4.4)

 

Full Future

Development ([F]D)

[F]A,i

irj

A,j

(For all j)

 

Partial Future

Development (⟨F⟩D)

⟨F⟩A,i

irj

A,j

 

(j must be new: it cannot occur anywhere above on the branch)

 

Negated Full Future

Development (¬[F]D)

¬[F]A,i

⟨F⟩¬A,i

 

Negated Partial Future

Development (¬⟨F⟩D)

¬⟨F⟩A,i

[F]¬A,i

 

Full Past

Development ([P]D)

[P]A,i

jri

A,j

 

(For all j)

 

Partial Past

Development (⟨P⟩D)

⟨P⟩A,i

jri

A,j

 

(j must be new: it cannot occur anywhere above on the branch)

 

Negated Full Past

Development (¬[P]D)

¬[P]A,i

⟨P⟩¬A,i

 

Negated Partial Past Development (¬⟨P⟩D)

¬⟨P⟩A,i

[P]¬A,i

(p.50, section 3.6a.6, with my added names and other data at the bottoms)

 

α= World Equality (α=D)

α(i)

j=i

α(j)

.

α(i)

i=j

α(j)

.

where α(i) is a line of the tableau containing an ‘i’. 

α(j) is the same, with ‘j’ replacing ‘i’. Thus:

if α(i) is A, i, α(j) is A, j

if α(i) is kri, α(j) is krj

if α(i) is i = k, α(j) is j = k  

(53; with my naming additions and copied text at the bottom)

 

ρ, Reflexivity (ρrD)
.
iri

σ, Symmetry (σrD)
irj
jri

τ, Transitivity (τrD)
irj
jrk
irk

η, Extendability (ηrD)
.
irj

It is applied to any integer, i, on a branch, provided that there is not already something of the form irj on the branch, and the j in question must then be new.

δ, Denseness (δrD)
irj
irk
krj

where k is new to the branch.

ϕ, Forward Convergence (ϕrD)
irj
irk
↙   ↓   ↘
jrk   j=k   krj

where i, j and k are distinct.

β, Backward Convergence (βrD)
jri
kri
↙   ↓   ↘
jrk   j=k   krj

where i, j and k are distinct.

(53; with my naming additions and copied text at the bottom)

 

(3.6b.5) In order to give the tableau rules for the ϕ and β constraints, we first need the rules for world equality. (See the table “α= World Equality (α=D)” above.) (3.6b.6) Priest next gives the rules for the ϕ and β constraints. (See them above.) (3.6b.7) Priest next gives an example tableau. (3.6b.8) We make counter-models in the following way. “For each number, i, that occurs on the branch, there is a world, wi; wiRwj iff irj occurs on the branch; for every propositional parameter, p, if p, i occurs on the branch, vwi(p) = 1, if ¬p, i occurs on the branch, vwi(p) = 0 (and if neither, vwi(p) can be anything one wishes)” (p.27); however, “whenever there is a bunch of lines of the form i = j, j = k, . . . , we choose only one of the numbers, say i, and ignore the others” (54). (3.6b.9) “The tableaux for Kt and its various extensions are sound and complete with respect to their semantics” (55). (3.6b.10) The past cannot be altered, so there can only be one timeline of past events. But supposing that the future can be altered, then there are numerous timelines of sequences of future events. “Thus, one might suppose, time satisfies the condition β of backward convergence, but not the condition ϕ of forward convergence” (55-56). (3.6b.11) But just because there are two possible futures, ◊⟨F⟩p ∧ ◊⟨F⟩¬p, does not mean there are two actual futures, ⟨F⟩p ∧ ⟨F⟩¬p.




ch.4

Non-Normal Modal Logics; Strict Conditionals

 

4.1

Introduction

 
(4.1.1) In the following sections of this chapter, we will examine non-normal modal logics. They involve non-normal worlds, which are ones with different truth conditions for the modal operators. (4.1.2) Following that in the chapter is an examination of the strict conditional.
 
 

4.2

Non-Normal Worlds

 

(4.2.1) We will first examine the technical elements of non-normality. (4.2.2) Our interpretations of non-normal modal logics take the structure ⟨W, N, R, v⟩. W is the set of worlds. R is the accessibility relation. v is the valuation function. And N is the set of normal worlds, with all the remaining worlds in W being non-normal ones. (4.2.3) The semantics are the same as before, except that at non-normal worlds, all necessary propositions (those starting with □) are always false, and all possible propositions (those starting with ◊) are always true. For, in non-normal worlds, nothing is necessary and all is possible. (4.2.4) At every world, including non-normal ones, ¬□A and ◊¬A have the same truth value. ¬◊A and □¬A do too. (4.2.5) Inferences are valid only if they preserve truth in all interpretations at all normal worlds. (4.2.6) Non-normal modal logics with the structure ⟨W, N, R, v⟩ in which R is a binary relation on W are called N, with such R constraints as ρ, σ, τ, etc. creating extensions of N like Nρ, Nστ, etc. (So here we have N for non-normal modal logics where we previously had K and its extensions for normal modal logics.) And, “As for normal logics, Nρτ is an extension of Nρ, which is an extension of N, etc.” (4.2.7) Nρ = S2; Nρτ = S3; and Nρστ = S3.5, with the first two S’s being “Lewis systems” and the last one being a “non-Lewis system”. (4.2.8) Although non-normal worlds originally were fashioned solely for technical reasons, in fact they have a philosophical meaning too.
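The modified truth conditions at non-normal worlds are easy to state computationally: outside N, every □-formula receives 0 and every ◊-formula receives 1, whatever R looks like. The Python sketch below is my own illustration, reusing the tuple encoding from the earlier sketches; the two-world interpretation is arbitrary.

```python
# Evaluation in a non-normal interpretation ⟨W, N, R, v⟩: at non-normal worlds,
# □A is always false and ◊A is always true.

W = {'w0', 'w1'}
N = {'w0'}                               # w1 is a non-normal world
R = {('w0', 'w1')}
v = {('w0', 'p'): 1, ('w1', 'p'): 1}

def holds(formula, w):
    if isinstance(formula, str):
        return v[(w, formula)] == 1
    op = formula[0]
    if op == 'not':
        return not holds(formula[1], w)
    if op == 'nec':
        if w not in N:
            return False                 # nothing is necessary at a non-normal world
        return all(holds(formula[1], u) for u in W if (w, u) in R)
    if op == 'poss':
        if w not in N:
            return True                  # everything is possible at a non-normal world
        return any(holds(formula[1], u) for u in W if (w, u) in R)

print(holds(('nec', 'p'), 'w0'))                  # True: p holds at the accessible w1
print(holds(('nec', ('nec', 'p')), 'w0'))         # False: □p fails at the non-normal w1
```

The second print also previews the point made later in 4.4.7: □p holds at the normal world w0, but □□p fails there, because □p is automatically false at the non-normal w1.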

 

 

4.3

Tableaux for Non-Normal Modal Logics

 

(4.3.1) The tableau rules for the non-normal modal logics N are mostly the same as for the normal modal logics K. We need however to add the following exception: “If world i occurs on a branch of a tableau, call it □-inhabited if there is some node of the form □B,i on the branch. The rule for ◊A,i is activated only when i = 0 or i is □-inhabited” (65). (4.3.2) The ◊-rule (that if you have ◊A,i on one node you can obtain that there is an accessible world where A holds) applies only to normal worlds, because possibility in non-normal worlds does not require a formula holding in an accessible world; rather, all possibilities are simply true regardless of other worlds. (4.3.3) Priest gives an example showing how the ◊-rule is applied when dealing with world 0. (4.3.4) Priest gives another example where we see that, as world 1 is not □-inhabited, we do not apply the ◊-rule to a case in world 1 where there is the possibility operator. (4.3.5) We form counter-models while keeping in mind which worlds are non-normal. We assign worlds in accordance with the i numbers. We assign R relations in accordance with irj formulations. For nodes of the form p,i we assign vwi(p) = 1, and for nodes of the form ¬p,i we assign vwi(p) = 0. If there are neither of these two, then vwi(p) can be given any value we want. (4.3.6) Priest gives an example of a counter-model. When depicting non-normal worlds, we place the world designator in a box but write the true formulas in that world above the box. (4.3.7) Tableaux for Nρ, Nρτ, etc. use the same additional rules as for Kρ, Kρτ, etc. (4.3.8) “The tableaux for N and its extensions are sound and complete” (67).

 

 

4.4

The Properties of Non-Normal Logics

 

(4.4.1) K interpretations are special cases of N interpretations, where all the worlds are normal (W = N). This means that K is an extension of N, because “if truth is preserved at all worlds of all N-interpretations, it is preserved at all worlds of all K-interpretations” (67). (4.4.2) The extensions of K are also extensions of their respective N extensions, such that for example “Kστ is an extension of Nστ and so on” (67). (4.4.3) Each K-logic is a proper extension of its corresponding N-logic. (4.4.4) Kρστ is the strongest logic we have seen so far. It is a normal logic, and every other normal logic we have seen is contained in it. Moreover, every non-normal system we have seen is contained in its corresponding normal system (with none yet being stronger than Kρστ  and thus every non-normal system is weaker than Kρστ). In fact, “N is the weakest system we have met. It is contained in every non-normal system, and also in K, and so in every normal system” (68). (4.4.5) If we now instead define logical validity as truth preservation over all worlds, including non-normal ones, then certain formulas like □(A ∨ ¬A) will no longer be valid, because no necessary formulations are true in non-normal worlds. (4.4.6) The Rule of Necessitation is: for any normal system, ℒ, if ⊨ A then ⊨ □A. (4.4.7) The Rule of Necessitation fails in non-normal systems, because it will not work when applied doubly on the same formula. (4.4.8) On account of the failure of the Rule of Necessitation in non-normal systems, “Non-normal worlds are, thus, worlds where ‘logic is not guaranteed to hold’” (69).

 

 

 

4.4a

S0.5

 

(4.4a.1) L is a type of non-normal modal logic. Modal formulas are “sentences of the form □A and ◊A.” And in L, “modal formulas are assigned arbitrary truth values at non-normal worlds” (69). (4.4a.2) In L, the evaluation function v “assigns each modal formula a truth value at every non-normal world” (69). (4.4a.3) The tableau rules for L are the same as for N, except “there are no rules applying to modal formulas or their negations at worlds other than 0. That is, the rules of 2.4.4 apply at world 0 and world 0 only” (69). (4.4a.4) Priest then gives an example of how to create the tableau for a valid formula and another for an invalid one. (4.4a.5) We fashion counter-models from open branches in tableaux in L in the following way. Worlds in W are assigned according to i numbers: “For each number, i, that occurs on the branch, there is a world, wi” (p.27, section 2.4.7). World 0 is the only normal world, and all others are non-normal: “N = {w0}” (70). Accessibility relations follow the irj formulations: “wiRwj iff irj occurs on the branch” (p.27, section 2.4.7). Atomic propositional formulas are true for their indicated world, and negated atomic formulas are false for their world: “for every propositional parameter, p, if p,i occurs on the branch, vwi(p) = 1, if ¬p,i occurs on the branch, vwi(p) = 0 (and if neither, vwi(p) can be anything one wishes” (p.27, section 2.4.7). All necessity and possibility operated formulas, no matter how complex, in non-normal worlds are assigned truth values in the same way: “if i > 0, and □A,i, is on the branch, vwi (□A) = 1; if ¬□A,i, is on the branch, vwi (□A) = 0; similarly for ◊A” (70). (4.4a.6) Priest then provides an example of a counter-model. (4.4a.7) We obtain extensions of L by applying constraints on the accessibility relation, as for example ρ (reflexivity), σ (symmetry), and τ (transitivity). (4.4a.8) “The tableaux for L and its extensions are sound and complete with respect to their semantics” (70). (4.4a.9) On account of the historical development of modal logics, S0.50 and S0.5 are L and Lρ respectively, only without the possibility operator used in them. (4.4a.10) We cannot define the possibility operator for L in the same way as for K and N, i.e., ‘◊A’ as ‘¬□¬A’, because these formulas do not necessarily have the same truth value in all worlds for L. (4.4a.11) “If we wish to make ◊ behave in L as it does when it is defined, we have to add an extra constraint: for every world, w, vw(◊A) = vw(¬□¬A) (that is, vw(◊A) = 1 − vw(□¬A)” (71). (4.4a.12) But this equivalence breaks down in non-normal worlds in L. (4.4a.13) N is a proper extension of L, and L is the weakest modal logic we have seen so far. (4.4a.14) Since L has non-normal worlds, the Rule of Necessitation fails in L. Thus, “that ‘logic need not hold’ at non-normal worlds in L is patent: if A is a logical truth, □A can behave any old way at such a world” (71). (4.4a.15) “□A is valid in L (and Lρ) iff A is a truth-functional tautology, or, more accurately, is valid in virtue of its truth-functional structure” (71).

 

 

4.5

Strict Conditionals

 

(4.5.1) We now will examine the conditional in the context of modal logic. (4.5.2) There are contingently true material conditionals that we would not want to say express true conditionals, on account of their contingency, like “The sun is shining ⊃ Canberra is the federal capital of Australia.” For, “Things could have been quite otherwise, in which case the material conditional would have been false” (72). To remedy this, we could define the conditional as: “‘if A then B’ as □(A ⊃ B), where □ expresses an appropriate notion of necessity” (72). (4.5.3) This definition of the conditional using modal logic is called the strict conditional, symbolized as A ⥽ B and defined as □(A ⊃ B). (4.5.4) The strict conditional does not validate the problematic counter-examples (that we have seen) for the material conditional.

 

 

4.6

The Paradoxes of Strict Implication

 

(4.6.1) We wonder whether the definition of the strict conditional (A ⥽ B defined as □(A ⊃ B)) is adequate. But first we need to address the matter of its variance under different systems of modal logic. (4.6.2) To model conditionality in general and the strict conditional in particular, we need modus ponens to hold, as it is a basic inferential principle that should hold when the conditional has its normal semantics. But in systems without the ρ-constraint (reflexivity), modus ponens will fail. Thus our system at least needs the ρ-constraint. (4.6.3) We need not narrow our systems down any further than systems with the ρ-constraint, because no matter what, they will all lead to the paradoxes of strict implication: ‘□B ⊨ A ⥽ B’, ‘¬◊A ⊨ A ⥽ B’; and also ‘⊨ A ⥽ (B ∨ ¬B)’, ‘⊨ (A ∧ ¬A) ⥽ B’.

 

 

4.8

The Explosion of Contradictions

 

(4.8.1) One of the paradoxes of the strict conditional is: ⊨ (A ∧ ¬A) ⥽ B. By modus ponens we derive: (A ∧ ¬A) ⊨ B. In other words, contradictions entail everything (any arbitrary formula whatsoever). But this is counter-intuitive, and there are counter-examples that we will consider. (4.8.2) The first counter-example: Bohr knowingly combined inconsistent assumptions in his model of the atom, yet the model functioned well. However, explosion does not hold here, because we cannot on the basis of the contradiction infer everything else, like electronic orbits being rectangles. (4.8.3) The second counter-example: we can have inconsistent laws without their contradiction entailing everything. (4.8.4) The third counter-example: there are perceptual illusions that give us inconsistent impressions without giving us all impressions. For example, the waterfall illusion gives us the impression of something moving and not moving, but it does not thereby also give us every other impression whatsoever. The fourth counter-example: there can be fictional situations in which contradictions hold without everything else thereby holding as well.

 

 

4.9

Lewis’ Argument for Explosion


(4.9.1) Strict conditionals do not require relevance, as we see for example with ⊨ (A ∧ ¬A) ⥽ B. So we might object to them on this basis. (4.9.2) C.I. Lewis argues that (A ∧ ¬A) ⥽ B is intuitively valid, because from A ∧ ¬A it is intuitively valid to infer A and ¬A; from ¬A it is intuitively valid to infer ¬A ∨ B; and from A and ¬A ∨ B it is intuitively valid, by disjunctive syllogism, to derive B. [Now, if each step has a connection on the basis of its intuitive validity, that means the final conclusion B should have a connection, by extension, to A ∧ ¬A on the basis of the intuitively valid steps leading from the premise to the final conclusion. So despite objections to the contrary, there is a connection between the antecedent and consequent in (A ∧ ¬A) ⥽ B, according to Lewis.] (4.9.3) C.I. Lewis also formulates an argument for the connection between antecedent and consequent in A ⥽ (B ∨ ¬B), but this argument is a bit less convincing than the one for (A ∧ ¬A) ⥽ B.




ch.5

Conditional Logics

 

5.1

Introduction

 

(5.1.1) We look now at conditional logics, which are modal logics with “a multiplicity of accessibility relations of a certain kind” (82). (5.1.2) We will also consider some more problematic inferences involving the conditional.

 

 

5.2

Some More Problematic Inferences

 

(5.2.1) There are three inferences involving the conditional that are valid in classical logic and for the strict conditional, but as we will see in the next section, they are problematic. They are: {1} Antecedent strengthening: A ⊃ B ⊨ (A ∧ C) ⊃ B; {2} Transitivity: A ⊃ B, B ⊃ C ⊨ A ⊃ C; and {3} Contraposition: A ⊃ B ⊨ ¬B ⊃ ¬A. (5.2.2) Here are the problematic counter-example illustrations. {1} Antecedent strengthening: A ⊃ B ⊨ (A ∧ C) ⊃ B; “If it does not rain tomorrow we will go to the cricket. Hence, if it does not rain tomorrow and I am killed in a car accident tonight then we will go to the cricket.” {2} Transitivity: A ⊃ B, B ⊃ C ⊨ A ⊃ C; “If the other candidates pull out, John will get the job. If John gets the job, the other candidates will be disappointed. Hence, if the other candidates pull out, they will be disappointed.” {3} Contraposition: A ⊃ B ⊨ ¬B ⊃ ¬A; “If we take the car then it won’t break down en route. Hence, if the car does break down en route, we didn’t take it.” (5.2.3) One might reply to the above objections by saying that they are enthymemes and thus would be valid were we to supply the right relevant information among the premises. (5.2.4) When we supply additional relevant material to the premises of these counter-example illustrations, they show their validity. (5.2.5) But since we cannot explicitly list in the premises all the circumstances needed to make such arguments fully non-enthymematic, this reply does not yet succeed. (5.2.6) But in fact we can capture all of these infinitely many needed additional de-enthymemizing clauses by simply saying, for all of them, “other things being equal,” which is called a ceteris paribus clause. (5.2.7) Ceteris paribus clauses {1} are conditioned by the antecedent term they are conjoined with, because that term might require that particular clauses be implied while others be excluded, and {2} are context-dependent. (5.2.8) ‘A > B’ means a conditional with a ceteris paribus clause. And, “A > B is true (at a world) if B is true at every (accessible) world at which A ∧ CA is true” (84).

 

 

 

ch.6

Intuitionistic Logic

 

6.1

Introduction

 

In this chapter we will examine intuitionistic logic. It arose from intuitionism in mathematics, and it has a natural possible world semantics. We will also examine its philosophical foundations and its account of the conditional.

 

 

6.2

Intuitionism: The Rationale

 

(6.2.1) To understand the original rationale for intuitionism, we should note that we can understand strange sentences that we never heard before, like, “Granny had led a sedate life until she decided to start pushing crack on a small tropical island just south of the Equator.” (6.2.2) We can understand such complex unfamiliar sentences on account of compositionality, which says that “the meaning of a sentence is determined by the meanings of its parts, and of the grammatical construction which composes these” (103). (6.2.3) An orthodox view of meaning is that the meaning of a statement is given by its truth conditions (“the conditions under which it is true”). On account of compositionality, statements built up using connectives are determined on the basis of the connectives’ truth-functional operation on the  truth conditions of the constituent statements. (6.2.4) The common notion of truth is that it is a  correspondence between what a linguistic formulation says and an extra-linguistic reality in which that said circumstance in fact holds. But we think there are mathematical truths and meaningful formulations, yet the idea of an extra-linguistic reality is problematic in mathematics, as we will see. (6.2.5) Mathematical realists hold that there is an extra-linguistic reality corresponding to the truths of mathematical formulations like “2 + 3 = 5;” they think for example that there are “objectively existing mathematical objects, like 3 and 5.” Intuitionists however see this as a sort of mystical view and think rather that we should not apply the correspondence theory of truth to mathematical formulations. (6.2.6) Intuitionism expresses a statement’s meaning on the basis of its proof conditions, which are the conditions under which the sentence is proved. (6.2.7) The proof condition of a simple sentence is whatever we would take to be a sufficient proof [as for example a sufficient mathematical proof for a mathematical formula.] The proof conditions for complex sentences built up using connectives will be similar to the normal conditions only now using the notion of proof (note that ⇁ and ⊐ symbolize negation and the conditional):

A proof of A ∧ B is a pair comprising a proof of A and a proof of B.

A proof of A ∨ B is a proof of A or a proof of B.

A proof of ⇁A is a proof that there is no proof of A.

A proof of A ⊐ B is a construction that, given any proof of A, can be applied to give a proof of B.

(104)

(6.2.8) These proof conditions cannot validate excluded middle, because there are formulas that cannot be proved nor can it be proven that there is no proof for them.

 

6.3

Possible Worlds Semantics for Intuitionism

 

(6.3.1) We will examine a certain sort of possible world semantics to capture the ideas of intuitionistic logic. (6.3.2) The only connectives in our intuitionist logic are ∧, ∨, ⇁ and ⊐ (with the last two being negation and the conditional, respectively). (6.3.3) Our intuitionistic possible worlds semantics takes the structure ⟨W, R, v⟩. It is mostly the same as the logic Kρτ, meaning that it is a normal modal logic in which the R accessibility relation is reflexive (all worlds have access to themselves) and transitive (whenever a first world has access to a second and that second to a third, then the first has access to that third as well). There is one additional constraint, called the heredity condition, which means that when a proposition is true in one world, it is true in all other worlds that are accessible from it. (6.3.4) By means of certain rules we evaluate molecular formulas. Negation and the conditional involve accessible worlds. (6.3.5) The heredity condition holds not just for propositional parameters but for all formulas. (6.3.6) To see how the above interpretation captures intuitionist ideas, we first conceive of the way that information accumulates over time: one world (like our world at one moment) is a set of proven things, and another world accessible from the first has the same proven things and maybe more (like our world progressing later into a world perhaps with more information). (6.3.7) The possible world semantics for intuitionism captures the ideas in the proof conditions. (6.3.8) We define validity in intuitionistic logic as truth preservation over all worlds of all interpretations, and we write intuitionistic logical consequence as ⊨I. (6.3.9) If there is only one world, the intuitionistic interpretation is equivalent to a classical one. And intuitionistic logic is a sub-logic of classical logic, because everything that is intuitionistically valid is classically valid, but not everything classically valid is intuitionistically valid. (6.3.10) By adding constraints on the R accessibility relation in intuitionistic logic, we can generate stronger logics.
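The heredity condition can be checked directly on a finite interpretation: for every R-related pair of worlds and every propositional parameter, truth at the earlier world must carry over to the later one. The Python sketch below is my own illustration; the two-world interpretation is arbitrary.

```python
# Checking the heredity condition: if w is R-related to w', anything true at w
# stays true at w' (information is never lost as it accumulates).

W = {'w0', 'w1'}
R = {('w0', 'w0'), ('w1', 'w1'), ('w0', 'w1')}     # reflexive and transitive
v = {('w0', 'p'): 0, ('w1', 'p'): 1,               # p becomes proved only at w1
     ('w0', 'q'): 1, ('w1', 'q'): 1}
params = {'p', 'q'}

def heredity_ok(W, R, v, params):
    return all(v[(w2, p)] == 1
               for (w1, w2) in R
               for p in params
               if v[(w1, p)] == 1)

print(heredity_ok(W, R, v, params))    # True: nothing proved at w0 is lost at w1
```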

 

 

 

 

6.4

Tableaux for Intuitionistic Logic

 

(6.4.1) Our tableaux for intuitionistic logic will build from those for modal logic, but with some modifications. In modal logic, the nodes take one of two forms: {1} A,i, where A is a formula and i is a natural number indicating the world in which the formula holds, or {2} irj, where i is a natural number for a world that accesses world j, also given as a natural number (the r stays as r). For our intuitionistic tableaux, “The first modification is that a node on the tableau is now of the form A,+i or A,−i. The first means, intuitively, that A is true at world i; the second means that A is false at i” (107). Previously we did not need this information in the tableaux about truth and falsity, because A’s being false in a world was equivalent to its negation being true, and so we would represent that with ¬A,i. (It seems then that being “false,” or at least lacking a proof, here in intuitionistic systems means either that {1} within some world there is a disproof (a proof that there is no proof) for a formula, in other words, that for instance ⇁A holds in world 0 (maybe written as vw0(⇁A) = 1 and) symbolized as ⇁A,+0 in the tableaux, which means that it is the case that there is a disproof for A in world 0, or that {2} there is currently neither a proof for a formula nor a disproof for that formula (I am not sure how that is written normally, maybe for instance as vw0(A) = 0, but it is) written as A,−0 in the tableaux, meaning that it is not the case that there is a proof for A in world 0. (If there were a disproof, and thus if vw0(⇁A) = 1, then I think you would still thereby have vw0(A) = 0.)) (6.4.2) We form the initial list of our tableaux by setting all premises to true in world 0, thus as: B,+0. And the conclusion is set to false in world 0, thus as: A,−0. (6.4.3) We close a branch on our tableau when we obtain a contradiction, that is, “just when we have nodes of the form A,+i and A,−i”. (6.4.4) Priest then gives the tableaux rules (see below; the list includes the accessibility rules from the next section also).

 

Conjunction Development, True (∧D,+)

A ∧ B,+i

A,+i

B,+i

 

 Conjunction

Development, False (∧D,−)

A ∧ B,−i

↙   ↘

A,−i      B,−i

 

 Disjunction

Development, True (∨D,+)

A ∨ B,+i

↙   ↘

A,+i      B,+i

 

Disjunction

Development, False (∨D,−)

A ∨ B,−i

A,−i

B,−i

 

 Conditional

Development, True (⊐D,+)

A ⊐ B,+i

irj

↙   ↘

A,−j        B,+j

applied for every j on the branch

 

Conditional

Development, False (⊐D,−)

A ⊐ B,−i

irj

A,+j

B,−j

the j is new

 

Negation

Development, True (⇁D,+)

⇁A,+i

irj

A,−j

applied for every j on the branch

 

Negation

Development, False (⇁D,−)

⇁A,−i

irj

A,+j

the j is new

 

Heredity, True  (hD,+)

p,+i

irj

p,+j

.

p is any propositional parameter, applied to every j (distinct from i)

(modified from p.108, section 6.4.4)

 

ρ, Reflexivity (ρrD)

ρ

.

iri

 

τ, Transitivity (τrD)

τ

irj

jrk

irk

 

Priest has us “Note that, in particular, we can never ‘tick off’ any node of the form A ⊐ B,+i or ⇁A,+i, since we may have to come back and reapply the rule if anything of the form irj turns up” (108-109). (6.4.5) We also have the ρ reflexivity and τ transitivity accessibility rules (shown in the listing above). (6.4.6) Priest next gives an example tableau to show that ⊢I p ⊐ ⇁⇁p. (6.4.7) Priest next gives another example tableau that shows that p ⊐ q ⊬I ⇁p ∨ q. (6.4.8) “Counter-models are read off from an open branch of a tableau in a natural way. The worlds and accessibility relation are as the branch of the tableau specifies. If a node of the form p,+i occurs on the branch, p is set to true at wi ; otherwise, p is false at wi . (In particular, if a node of the form p,−i occurs on the branch, p is set to false at wi )” (110). (6.4.9) Priest then gives a more visual portrayal of the counter-model from above section 6.4.8. Here “We indicate the fact that p is true (at a world) by +p, and the fact that it is false by −p” (110). (6.4.10) “The tableaux are sound and complete with respect to the semantics” (111). (6.4.11) Priest then gives an example of an infinite open tableau. (6.4.12) Priest then shows how it is easier to directly make a counter-model in cases of infinite tableaux.

 

 

 

 

ch.7

Many-valued Logics

 

7.1

Introduction

 

Many-valued logics have more than two truth values. We will examine the semantics of propositional many-valued logics in this chapter along with other philosophical and logical issues related to many-valuedness.

 

 

7.2

Many-valued Logic: The General Structure

 

In providing the general structure for many-valued logics, we first simplify our system by defining material equivalence in the following way:

A ≡ B is defined as (A ⊃ B) ∧ (B ⊃ A)

We will articulate the structure of many-valued logics by naming all the components, including the parts relevant for truth and validity evaluations. In its most condensed form, the structure of many-valued logics is:

⟨V, D, {fc : c ∈ C}⟩

V is the set of assignable truth values. D is the set of designated values, which are those that are preserved in valid inferences (like 1 for classical bivalent logic).  C is the set of connectives. c is some particular connective. And fc is the truth function corresponding to some connective, and it operates on the truth values of the formula in question. In a classical bivalent logic,

V = {1, 0}

D = {1}

C = {¬, ∧, ∨, ⊃, ≡} (but recall we have redefined ≡)

{fc : c ∈ C} = {f¬, f∧, f∨, f⊃}

We also have an interpretation function v that assigns values to the propositional parameters, and the connective truth functions operate recursively on the assigned propositional parameter values to compute the values of the complex formulas. The connective truth functions are defined in terms of the series of values for the places in the n-tuple corresponding to that connective:

if c is an n-place connective,

v(c(A1, ... , An)) = fc(v(A1), ... , v(An))

For example, we could consider a classical bivalent system where V = {1, 0}, and we could define the connective functions for negation and conjunction in the following way.

 f¬ is a one-place function such that f¬(0) = 1 and f¬(1) = 0;

f∧ is a two-place function such that f∧(x, y) = 1 if x = y = 1, and f∧(x, y) = 0 otherwise [...]

 

f¬  
1 0
0 1

 

f∧ 1 0
1 1 0
0 0 0

(120-121)

 

The connective evaluations are done recursively. We substitute the connective truth functions in for the connectives themselves by working from greatest to least scope. For example:

v(¬(p ∧ q)) = f¬(v(p ∧ q)) = f¬(f∧(v(p), v(q)))

Consider the following value assignments for the above formula:

v(p) = 1 and v(q) = 0

Using our connective truth function definitions from above, we would recursively evaluate by going from least to greatest scope, so:

v(¬(p ∧ q)) = f¬(f∧(1, 0)) = f¬(0) = 1

Semantic entailment, validity, and tautology (logical truth) are defined using D, the set of designated values. A set of formulas semantically entails some conclusion when there is no interpretation that assigns designated values to the premises while not assigning a designated value to the conclusion.

Σ ⊨ A iff there is no interpretation, v, such that for all B ∈ Σ, v(B) ∈ D, but v(A) ∉ D

Thus a valid inference is one where there is no interpretation in which all the premises have designated values but the conclusion does not. A formula is a logical truth (tautology) when every evaluation assigns it a designated value.

A is a logical truth iff φ ⊨ A, i.e., iff for every interpretation, v(A) ∈ D

In order to craft a many-valued system of our choosing, we can modify the components of this structure. We of course will want to expand V to include three or more possible assignments for truth-value. We might also want to restructure validity by adding designated values. Additionally, we could change the types of connectives or alter the evaluations for their truth functions. We say that a logic is finitely many-valued when V has a finite number of values in it; and when V has n members, we say that it is an n-valued logic. We can evaluate an argument for validity by computing the values for the premises and conclusions for every possible set of assignments for the propositional parameters. When there is an interpretation where all the premises have a designated value but the conclusion does not, then it is invalid, and valid otherwise. The number of possible sets of assignments can become unmanageable for such validity evaluations, because they increase exponentially with each additional propositional parameter.

if there are m propositional parameters employed in an inference, and n truth values, there are n^m possible cases to consider.

(122)
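[Here is a minimal Python sketch of my own (not from Priest's text) of the structure ⟨V, D, {fc : c ∈ C}⟩ for the classical two-valued case, together with the brute-force validity test over all n^m assignments; the formula encoding and function names are my own assumptions:

from itertools import product

V = (1, 0)                # truth values
D = {1}                   # designated values
f_neg  = {1: 0, 0: 1}
f_conj = {(x, y): 1 if x == y == 1 else 0 for x in V for y in V}

def value(formula, v):
    """Recursively compute v(A).  Formulas are nested tuples, e.g.
    ('neg', ('conj', 'p', 'q')) for ¬(p ∧ q); parameters are strings."""
    if isinstance(formula, str):
        return v[formula]
    if formula[0] == 'neg':
        return f_neg[value(formula[1], v)]
    if formula[0] == 'conj':
        return f_conj[(value(formula[1], v), value(formula[2], v))]

def valid(premises, conclusion, params):
    """Sigma entails A iff no assignment designates every premise without
    designating the conclusion (n**m cases for m parameters)."""
    for values in product(V, repeat=len(params)):
        v = dict(zip(params, values))
        if all(value(B, v) in D for B in premises) and value(conclusion, v) not in D:
            return False
    return True

# v(¬(p ∧ q)) with v(p) = 1 and v(q) = 0, as in the worked example above: gives 1.
print(value(('neg', ('conj', 'p', 'q')), {'p': 1, 'q': 0}))   # 1
print(valid([('conj', 'p', 'q')], 'p', ['p', 'q']))           # True: p ∧ q entails p

]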

 

 

7.3

The 3-valued Logics of Kleene and Łukasiewicz

 

The structure of many-valued logics can be formulated as:

⟨V, D, {fc : c ∈ C}⟩

V is the set of assignable truth values. D is the set of designated values, which are those that are preserved in valid inferences (like 1 for classical bivalent logic). C is the set of connectives. c is some particular connective. And fc is the truth function corresponding to some connective, and it operates on the truth values of the formula in question. In classical logic: the assignable truth values of V are true and false, or 1 and 0; the designated values are just 1; and the connective functions are: f¬, f∧, f∨, f⊃. [A ≡ B we are defining as (A ⊃ B) ∧ (B ⊃ A).] And finally, the connective functions operate on truth values in accordance with certain rules (displayed often as the truth tables for the connectives that we are familiar with). [This was covered in the previous section.] K3 (strong Kleene three-valued logic) and Ł3 are two sorts of three-valued logics. Both keep D as {1}, and both extend V to {1, i, 0}. 1 is true, 0 is false, and i is neither true nor false. We have the same connectives (excluding for simplicity the biconditional), and the connective functions associated with them are defined in the following way. K3 in particular has these assignments for the connective functions:

 

f¬  
1 0
i i
0 1

 

f∧ 1 i 0
1 1 i 0
i i i 0
0 0 0 0

 

f∨ 1 i 0
1 1 1 1
i 1 i i
0 1 i 0

 

f⊃ 1 i 0
1 1 i 0
i 1 i i
0 1 1 1

 

One problem with K3 is that every formula can obtain the undesignated value i by assigning all of its propositional parameters the value of i. (Simply look at the tables above where the input values are i. You will see in all cases the output is i too.) This means there are no logical truths in K3. One remedy for making the law of identity a logical truth is by changing the value assignment for the conditional such that when both antecedent and consequent are i, the whole conditional is 1.

 

f⊃ 1 i 0
1 1 i 0
i 1 1 i
0 1 1 1

 

This new system, where everything else is identical to K3 except for the above alternate valuation for the conditional, is called Ł3 (Łukasiewicz’ three-valued logic).
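[A small Python check of my own (not in the text): under the all-i assignment, the K3 conditional leaves p ⊃ p undesignated, while the single changed cell of Ł3 makes it come out 1:

ORDER = {0: 0, 'i': 1, 1: 2}
NEG = {1: 0, 'i': 'i', 0: 1}
def cond_K3(x, y): return max((NEG[x], y), key=ORDER.get)        # the K3 table: max(¬x, y)
def cond_L3(x, y): return 1 if x == y == 'i' else cond_K3(x, y)  # the one changed cell

print(cond_K3('i', 'i'))   # i : p ⊃ p is undesignated when v(p) = i, so it is not a K3 logical truth
print(cond_L3('i', 'i'))   # 1 : with the Ł3 conditional, p ⊃ p is designated on every assignment

]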

 

 

7.4

LP and RM3 

 

Last time we examined K3, which was defined as:

V = {1, i, 0}

D = {1}

{fc : c ∈ C} = {f¬, f∧, f∨, f⊃}

(note, A ≡ B we are defining as (A ⊃ B) ∧ (B ⊃ A))

f¬  
1 0
i i
0 1

 

f∧ 1 i 0
1 1 i 0
i i i 0
0 0 0 0

 

f∨ 1 i 0
1 1 1 1
i 1 i i
0 1 i 0

 

f⊃ 1 i 0
1 1 i 0
i 1 i i
0 1 1 1

 

Here i means neither true nor false. LP has the same structure as K3, except in LP we have D = {1, i}. And in LP, the 1 is understood to mean true and true only, 0 to mean false and false only, and i to mean both true and false. The connective functions then follow our intuition regarding this alternate sense for the V values. Suppose for A ∧ B that A is 1 and B is i. Since B is at least true, then A ∧ B is at least true. And since B is also at least false, A ∧ B is also at least false. So A ∧ B is both true and false, or i, which is what the truth tables calculate it to be. There are two notable advantages of LP over K3. {1} In LP, unlike in K3, the law of excluded middle holds:

⊭K3 p ∨ ¬p

⊨LP p ∨ ¬p

{2} And the principle of explosion, or the inference rule ex falso quodlibet, is valid in K3 but not in LP:

p ∧ ¬p ⊨K3 q

p ∧ ¬p ⊭LP q

But there is one disadvantage of LP compared with K3. In LP, modus ponens is not valid:

p, p ⊃ q ⊨K3 q

p, p ⊃ q ⊭LP q

This can be solved by changing the evaluation for the conditional connective in the following way, thereby creating RM3:

 

f⊃ 1 i 0
1 1 0 0
i 1 i 0
0 1 1 1
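[Here is a quick Python check of my own (not in the text) of the claims just made: with D = {1, i}, modus ponens fails in LP, and the RM3 conditional restores it. i is written as the string 'i':

ORDER = {0: 0, 'i': 1, 1: 2}
NEG = {1: 0, 'i': 'i', 0: 1}
def cond_LP(x, y):                        # LP keeps the K3 conditional table: max(¬x, y)
    return max((NEG[x], y), key=ORDER.get)
COND_RM3 = {(1, 1): 1, (1, 'i'): 0, (1, 0): 0,          # the RM3 table just above
            ('i', 1): 1, ('i', 'i'): 'i', ('i', 0): 0,
            (0, 1): 1, (0, 'i'): 1, (0, 0): 1}
D = {1, 'i'}

p, q = 'i', 0                             # the counterexample pattern for LP
print(p in D, cond_LP(p, q) in D)         # True True: both LP premises are designated
print(q in D)                             # False: the conclusion q is not, so MP fails in LP
print(COND_RM3[(p, q)] in D)              # False: in RM3 the premise p ⊃ q is not designated

]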

 

 

 

 

 

7.5

Many-valued Logics and Conditionals

 

(7.5.1) We will now examine the conditional operator in many-valued logics. (7.5.2) We will assess whether or not some problematic inferences using conditionals are valid in K3, Ł3, LP, and RM3, by making a table. (In the table below, a ‘✓’ means the inference or formula is valid in the given system, and an ‘×’ means it is not valid.)

 

 

 

1. q ⊨ p ⊃ q : K3 ✓, Ł3 ✓, LP ✓, RM3 ×

2. ¬p ⊨ p ⊃ q : K3 ✓, Ł3 ✓, LP ✓, RM3 ×

3. (p ∧ q) ⊃ r ⊨ (p ⊃ r) ∨ (q ⊃ r) : K3 ✓, Ł3 ✓, LP ✓, RM3 ✓

4. (p ⊃ q) ∧ (r ⊃ s) ⊨ (p ⊃ s) ∨ (r ⊃ q) : K3 ✓, Ł3 ✓, LP ✓, RM3 ✓

5. ¬(p ⊃ q) ⊨ p : K3 ✓, Ł3 ✓, LP ✓, RM3 ✓

6. p ⊃ r ⊨ (p ∧ q) ⊃ r : K3 ✓, Ł3 ✓, LP ✓, RM3 ✓

7. p ⊃ q, q ⊃ r ⊨ p ⊃ r : K3 ✓, Ł3 ✓, LP ×, RM3 ✓

8. p ⊃ q ⊨ ¬q ⊃ ¬p : K3 ✓, Ł3 ✓, LP ✓, RM3 ✓

9. ⊨ p ⊃ (q ∨ ¬q) : K3 ×, Ł3 ×, LP ✓, RM3 ×

10. ⊨ (p ∧ ¬p) ⊃ q : K3 ×, Ł3 ×, LP ✓, RM3 ×
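[The table can be verified mechanically. Here is a brute-force Python sketch of my own (not from Priest) that checks rows 1, 7 and 9 in all four logics; the encoding of formulas as Python functions is my own assumption:

from itertools import product

ORDER = {0: 0, 'i': 1, 1: 2}
NEG = {1: 0, 'i': 'i', 0: 1}
def OR(x, y):  return max((x, y), key=ORDER.get)
def AND(x, y): return min((x, y), key=ORDER.get)
def C_K3(x, y): return OR(NEG[x], y)                       # also the LP table
def C_L3(x, y): return 1 if x == y == 'i' else C_K3(x, y)
def C_RM3(x, y):
    if x == y: return 1 if x != 'i' else 'i'
    return 1 if ORDER[x] < ORDER[y] else 0

LOGICS = {'K3': (C_K3, {1}), 'L3': (C_L3, {1}),
          'LP': (C_K3, {1, 'i'}), 'RM3': (C_RM3, {1, 'i'})}

def valid(premises, conclusion, params, cond, D):
    for vals in product((1, 'i', 0), repeat=len(params)):
        a = dict(zip(params, vals))
        if all(P(a, cond) in D for P in premises) and conclusion(a, cond) not in D:
            return False
    return True

# Row 1: q entails p > q;  Row 7: p > q, q > r entail p > r;  Row 9: p > (q v not-q)
row1 = ([lambda a, c: a['q']], lambda a, c: c(a['p'], a['q']), ['p', 'q'])
row7 = ([lambda a, c: c(a['p'], a['q']), lambda a, c: c(a['q'], a['r'])],
        lambda a, c: c(a['p'], a['r']), ['p', 'q', 'r'])
row9 = ([], lambda a, c: c(a['p'], OR(a['q'], NEG[a['q']])), ['p', 'q'])

for name, (cond, D) in LOGICS.items():
    print(name, valid(*row1, cond, D), valid(*row7, cond, D), valid(*row9, cond, D))
# K3 True True False | L3 True True False | LP True False True | RM3 False True False

]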

 

(7.5.3) Generally speaking, the many-valued logics still validate many of the problematic inferences using the conditional. (7.5.4) We have the intuitions that in finitely many-valued logics, the following two things should hold:

(i) if A (or B) is designated, so is A ∨ B

(ii) if A and B have the same value, A ≡ B must be designated (since A ≡ A is).

Only in K3 does (ii) not hold. (7.5.5) Given these two rules, suppose we have a finitely many-valued logic and take one more sentence than there are truth values; then in any interpretation at least two of those sentences must receive the same value, so the disjunction of all the biconditionals relating distinct pairs of them will have to be logically valid, because at least one disjunct will have both of its sides taking the same value and so be designated. (7.5.6) But there are counter-examples to this claim (and in these counter-examples, the intuitive sense of the sentences does not allow for any true biconditional combination of two different sentences, even though technically at least one should evaluate as designated). For instance, “Consider n + 1 propositions such as ‘John has 1 hair on his head’, ‘John has 2 hairs on his head’, . . ., ‘John has n + 1 hairs on his head’. Any biconditional relating a pair of these would appear to be false. Hence, the disjunction of all such pairs would also appear to be false – certainly not logically true” (127). (So suppose we have a three-valued logic, and John has 1 hair on his head. That means “John has 2 hairs on his head if and only if John has 3 hairs on his head” comes out designated, on account of both sides being false, even though the intuitive sense of the formulation would make the biconditional false (or at least senseless); for, John’s having x hairs on his head should not be equivalent to his having some other number of hairs on his head. Thus finitely many-valued logics are vulnerable to the following problem: because the disjunction of all the biconditionals is evaluated as logically true, at least one disjunct must be designated, meaning that in the case of propositions like “John has x hairs on his head,” at least one biconditional of the form “John has x hairs on his head if and only if John has y hairs on his head” (with x ≠ y) must come out designated, even though that is senseless.)

 

 

 

 

 

7.6

Truth-value Gluts: Inconsistent Laws

 

(7.6.1) We will now examine philosophical motivations for advocating multi-valued logics with truth-value gaps or gluts. (7.6.2) In this chapter subsection, Priest will elaborate on the issue of inconsistent laws. (7.6.3) For example, consider if long ago there were the laws {1} that no aborigines have the right to vote, but {2} all property-holders have that right. At the time it was unthinkable for aborigines to own property, but later in history they do. Thus in the legal system, later on in history, aborigines both have and do not have the right to vote. (7.6.4) In cases of inconsistent laws, normally the laws are rectified to make them consistent. Nonetheless, they will remain inconsistent for some time until that change is made. (7.6.5) Priest next considers a possible objection, namely, that such seemingly contradictory laws are actually consistent, because there is always some other law that clarifies which of the two contradicting laws takes precedence; “for example lex posterior (that a later law takes precedence over an earlier law), or that constitutional law takes precedence over statute law, which takes precedence over case law. One might insist that all contradictions are only apparent” (128). (7.6.6) Priest’s reply to this objection is that while it may be that in actual fact there are many cases where additional laws dissolve the apparent legal contradiction, in principle contradictory laws remain possible, as for example if both laws were enacted at the same time and at the same legislative rank.

 

 

 

 

7.7

Truth-value Gluts: Paradoxes of Self-reference

 

(7.7.1) We will now consider paradoxes of self-reference as motivation for advocating truth-value gluts. (7.7.2) One paradox of self-reference is the liar paradox, for example, ‘this sentence is false’. “Suppose that it is true. Then what it says is the case. Hence it is false. Suppose, on the other hand, that it is false. That is just what it says, so it is true. In either case – one of which must obtain by the law of excluded middle – it is both true and false” (129). (7.7.3) Another paradox of self-reference is Russell’s Paradox: “Consider the set of all those sets which are not members of themselves, {x : x ∉ x}. Call this r. If r is a member of itself, then it is one of the sets that is not a member of itself, so r is not a member of itself. On the other hand, if r is not a member of itself, then it is one of the sets in r, and hence it is a member of itself. In either case – one of which must obtain by the law of excluded middle – it is both true and false.” (7.7.4) There are many such arguments that come to a conclusion of the form A ∧ ¬A, and supposing they are sound, that makes the conclusions true and thus means there really are truth-value gluts. (7.7.5) We will now briefly examine a couple of claims that these paradoxical arguments are not sound. (7.7.6) Objection 1: All self-referential sentences are meaningless. Reply 1: But there are many meaningful self-referential sentences, like ‘this sentence has five words’. (7.7.7) Objection 2: The liar sentence is neither true nor false. This logical assumption removes excluded middle, so we cannot run the argument as, “either it is true or it is false; if true, then false; if false, then true; thus it is both”. For now we have a third situation, that it is neither. (7.7.8) Reply 2: “Extended paradoxes” still present a contradiction. For example: “This sentence is either false or neither true nor false”. If it is true, it is either false or neither value. Either way, it is not true, which contradicts our assumption that it is true. If it is either false or neither valued (meaning that it is not true), then its value is what it claims to be, and thus it is true, which again contradicts what we assumed. Reply 3: Some paradoxes of self-reference, like Berry’s paradox, do not invoke the law of excluded middle.

 

 

7.8

Truth-value Gaps: Denotation Failure

 

(7.8.1) One motivation for arguing for truth-value gaps is intuitionistic situations where neither A nor ¬A can be verified. We discussed intuitionism previously, so we turn instead to two other arguments for gaps. (7.8.2) The first sort of argument for truth-value gaps concerns “sentences that contain noun phrases that do not appear to refer to anything, like names such as ‘Sherlock Holmes’, and descriptions such as ‘the largest integer’ (there is no largest)” (130). (7.8.3) Frege claimed that “all sentences containing such terms are neither true nor false;” but this is too strong a claim, because we would want, for example, the following sentence to be true: “Sherlock Holmes does not really exist,” even though it has a non-denoting term. (7.8.4) But there are sorts of sentences with non-denoting terms, called “truths of fiction,” that would seem really to be true, false, or neither on account of the fictional world they are statements about. For example, “Holmes lived in Baker Street” would be true, because that is where the author Conan Doyle says Holmes lives; “Holmes’ friend, Watson, was a lawyer” would be false, because Doyle says that Watson was a doctor; and “Holmes had three maiden aunts” would be neither true nor false, because Doyle never says anything about Holmes’ aunts or uncles. (7.8.5) But some say that fictional truth sentences are really shorthand for sentences beginning with “In the play/novel/film (etc.), it is the case that...”. So, “in Doyle’s stories, it is the case that Holmes lived in Baker Street;” “in Doyle’s stories, it is not the case that Watson was a lawyer;” and “in Doyle’s stories, it is not the case that Holmes had three maiden aunts, and it is not the case that he did not” (thereby making all such sentences true). (7.8.6) “Another sort of example of a sentence that can plausibly be seen as neither true nor false is a subject/predicate sentence containing a non-denoting description, like ‘the greatest integer is even’” (131). (7.8.7) But it is not necessary to say that sentences with non-denoting descriptions are neither true nor false, because they can serve the same purpose by being simply false. (7.8.8) And in fact, in many cases sentences with non-denoting descriptions would work better being simply false. For example, let “Father Christmas” be “the old man with a white beard who comes down the chimney at Christmas bringing presents”; then the following is simply false: “The Greeks worshipped Father Christmas.” (7.8.9) Nonetheless, even Russell’s view that sentences with non-denoting descriptions are false does not help for cases where we would say they should be true; “For example, it appears to be true that the Greeks worshipped the gods who lived on Mount Olympus” (131-132). (7.8.10) So although we have reason to pursue non-denotation as a motivation for truth-value gaps, we see that it is problematic.

 

 

7.9

Truth-value Gaps: Future Contingents

 

(7.9.1) Another motivation for holding that there are truth-value gaps is future contingents, which are statements about the future that can be uttered now but for which there presently are no facts that make them true or false, as for example: “The first pope in the twenty-second century will be Chinese” (132). (7.9.2) So some might claim that future contingents are really either true or false, and we simply do not know which yet. However, we will now examine Aristotle’s argument that this cannot be so. (7.9.3) Aristotle’s argument is that future contingents cannot be either true or false, because that would mean the futures they describe are certain and necessary, while in fact they are not. Take sentence S: “The first pope in the twenty-second century will be Chinese.” Suppose it is true. Then it could not fail to turn out that the first pope in the twenty-second century is Chinese, so that outcome would be necessary. Suppose instead S is false. Then that pope necessarily will not be Chinese, which also cannot be correct for the same reason. Either way, the future outcome would need to be necessary, but it is not, because for right now it is still contingent. Thus future contingents cannot be either true or false. (7.9.4) Objection: Aristotle’s argument, whose key step can be put as “If S were true now, then it would necessarily be the case that the first pope in the twenty-second century will be Chinese,” is ambiguous between □(A ⊃ B) (“it is necessarily true that: if it is true now that the first pope in the twenty-second century will be Chinese, then the first pope in the twenty-second century will be Chinese”) and A ⊃ □B (“if it is true now that the first pope in the twenty-second century will be Chinese, then it is necessarily true that the first pope in the twenty-second century will be Chinese”). (7.9.5) If we take the first interpretation, □(A ⊃ B), then it is true (because in a world, if something about the future is true now, then it cannot be otherwise that it will be false in the future of that world), but we cannot from A, □(A ⊃ B) infer □B (because A or the conditional might not hold in other worlds and thus B may not be true in all other worlds). If we take the second interpretation, A ⊃ □B, then we can from A, A ⊃ □B validly infer □B (by modus ponens), but we would not be justified in asserting A ⊃ □B in the first place (because we do not want to imply that B will happen no matter what anyway, fatalistically, regardless of A). (My parenthetical explanations are faulty here and will be revised after the elaborations in section 11a.7.)

 

 

7.10

Supervaluations, Modality and Many-valued Logic

 

(7.10.1) We turn now to two matters that are related to Aristotle’s argument for truth-value gaps on the basis of future contingents. (7.10.2) We will probably not want all statements about the future to be valueless, as many statements about the future can be determined now as true or false. Thus we need excluded middle to hold in many cases for statements about the future. And since it does not hold in “K3 or Ł3, these logics do not appear to be the appropriate ones for future statements” (133). (7.10.3) We can use a technique called supervaluation to produce a logic that is better suited to accommodate both valued and valueless statements about the future. “Let v be any K3 interpretation. Define v ⊑ v′ to mean that v′ is a classical interpretation that is the same as v, except that wherever v(p) is i, v′(p) is either 0 or 1. (So v′ ‘fills in all the gaps’ in v.) Call v′ a resolution of v. Define the supervaluation of v, v+, to be the map such that for every formula, A

v+(A) = 1 iff for all v′ such that v ⊑ v′, v′(A) = 1

v+(A) = 0 iff for all v′ such that v ⊑ v′, v′(A) = 0

v+(A) = i otherwise

The thought here is that A is true on the supervaluation of v just in case, however its gaps were to get resolved (and, in the case of future contingents, will get resolved), it would come out true. We can now define a notion of validity as something like ‘truth preservation come what may’, Σ ⊨S A (supervalidity), as follows:

Σ ⊨S A iff for every v, if v+(B) is designated for all B ∈ Σ, v+(A) is designated

(where the designated values here are as for K3),” namely, just 1 (pp.133-134). (7.10.4) “A fundamental fact is that Σ ⊨S A iff A is a classical consequence of Σ. (In particular, therefore, ⊨S A ∨ ¬A even though A may be neither true nor false!)” (134). (7.10.5) Classical validity and supervaluational validity coincide when the conclusion is understood to be a single formula, but they come apart for multiple-conclusion validity. For instance,

A ∨ B ⊨ A, B

is classically valid but not supervaluationally valid. (7.10.5a) Priest next shows how we can avoid the misalignment of classical and supervaluational validity for multiple conclusions by redefining supervaluational validity in the following way: “Define an inference to be valid iff, for every K3 interpretation, v, every resolution of v that makes every premise true makes some (or, in the single conclusion case, the) conclusion true. Since the class of resolutions of all K3 interpretations is exactly the set of classical evaluations, this gives exactly classical logic (single or multiple conclusion, as appropriate)” (134-135). (7.10.5b) To give an LP logic corresponding to the K3 logic from supervaluation, we use a technique called subvaluation: “we will use ⊨s instead of ⊨S (and call this subvalidity). This time, A ⊨s Σ iff the multiple conclusion inference from A to Σ is classically valid (and a fortiori for single conclusion inferences)” (135). (7.10.5c) Priest next notes that the above subvaluational technique of LP does not work for multi-premise inferences. For example, A, B ⊨ A ∧ B is classically valid but not subvaluationally valid. (7.10.5d) The different super/sub-valuational techniques render different notions of validity, and so we need to ask, “In the case of future contingents, for example, are we interested in preserving actual truth value, truth value we can ‘predict now’, or ‘eventual’ truth value?” (136) Priest notes that our answer can depend on why we think that gaps or gluts arise in such situations and on the sort of application we have in mind. (7.10.6) For Łukasiewicz, a statement about a future contingent says something that possibly may happen, but that can also fail to happen. Thus the future contingent prefixed by the possibility operator is true, while prefixed by the necessity operator it is false.

 

f◊
1 1
i 1
0 0

 

Defining □A in the standard way, as ¬◊¬A, gives it the truth table:

 

f□
1 1
i 0
0 0

(136)

 

(7.10.7) The above definitions for the modal operators give us a modal logic that captures some of Aristotle’s thinking on future contingency, like p ⊨Ł3 □p, but it betrays others, like ◊A, ◊B ⊨Ł3 ◊(A ∧ B). (7.10.8) “[N]one of the modal logics that we have looked at (nor conditional logics, nor intuitionist logic) is a finitely many-valued logic” (137). (7.10.9) Using the notion of uniform substitution, though, every such logic can still be given a many-valued semantics, if we allow infinitely many values. “A uniform substitution of a set of formulas is the result of replacing each propositional parameter uniformly with some formula or other (maybe itself). Thus, for example, a uniform substitution of the set {p, p ⊃ (p ∨ q)} is {r ∧ s, (r ∧ s) ⊃ ((r ∧ s) ∨ q)}. A logic is closed under uniform substitution when any inference that is valid is also valid for every uniform substitution of the premises and conclusion. All standard logics are closed under uniform substitution” (137). (7.10.10) “[E]very logical consequence relation, ⊢, closed under uniform substitution, is weakly complete with respect to a many-valued semantics. That is, ⊢ A iff A is logically valid in the semantics” (137).
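[Returning to the supervaluation construction in 7.10.3, here is a small Python sketch of my own (not from the text): it generates all resolutions of a K3 assignment and shows that p ∨ ¬p is supervaluationally true even when p is i, while p itself stays unsettled:

from itertools import product

def resolutions(v):
    """All classical valuations v' that agree with v except that each i
    is filled in as 0 or 1."""
    gaps = [p for p in v if v[p] == 'i']
    for fill in product((0, 1), repeat=len(gaps)):
        yield {**v, **dict(zip(gaps, fill))}

def supervalue(formula, v):
    """formula is a function from a classical valuation to 0 or 1."""
    outcomes = {formula(r) for r in resolutions(v)}
    return 1 if outcomes == {1} else 0 if outcomes == {0} else 'i'

v = {'p': 'i'}                                  # a future contingent, say
lem = lambda r: max(r['p'], 1 - r['p'])         # p ∨ ¬p, computed classically
print(supervalue(lem, v))                       # 1: true on every resolution
print(supervalue(lambda r: r['p'], v))          # i: p itself stays unsettled

]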




 

ch.8

First Degree Entailment

 

8.1

Introduction

 

In first degree entailment (FDE), interpretations are not formulated as functions that assign truth values, standard or not, to propositional parameters. Rather, in FDE, interpretations are formulated as relations  between formulas and standard truth values. In this chapter we examine FDE, along with an alternate possible world semantics for it, and we discuss the issues of explosion and disjunctive syllogism.

 

 

8.2

The Semantics of FDE

 

In our semantics for First Degree Entailment (FDE), our only connectives are ∧, ∨ and ¬ (with A ⊃ B being defined as ¬A ∨ B). FDE uses relations rather than functions to evaluate truth. So a truth-valuing interpretation in FDE is a relation ρ between propositional parameters and the values 1 and 0. We write pρ1 for “p relates to 1,” and pρ0 for “p relates to 0.” This allows a formula to have one of the following four value-assignment situations: just true (pρ1 but not pρ0), just false (pρ0 but not pρ1), both true and false (pρ1 and pρ0), and neither true nor false (neither pρ1 nor pρ0). In FDE, being false (that is, relating to 0) does not automatically mean being untrue (that is, not relating to 1), because a formula can still be related to 1 along with 0. For formulas built up with connectives, we use the same criteria as in classical logic to evaluate them, only here formulas can take both values or neither. In FDE, semantic consequence is defined as:

Σ ⊨ A iff for every interpretation, ρ, if Bρ1 for all B ∈ Σ, then Aρ1

(144)

and logical truth or tautology as:

⊨ A iff φ ⊨ A, i.e., for all ρ, Aρ1

(144)
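[Here is a minimal Python sketch of my own (not from the text) of FDE's relational semantics: each parameter is sent to a subset of {1, 0}, the clauses for the connectives are read off relationally, and explosion fails when p is a glut and q a gap:

def rel(formula, rho):
    """Set of values (a subset of {1, 0}) that the formula relates to under rho.
    Formulas: parameter strings, ('not', A), ('and', A, B), ('or', A, B)."""
    if isinstance(formula, str):
        return rho[formula]
    if formula[0] == 'not':
        sub = rel(formula[1], rho)
        out = set()
        if 0 in sub: out.add(1)        # ¬A relates to 1 iff A relates to 0
        if 1 in sub: out.add(0)        # ¬A relates to 0 iff A relates to 1
        return out
    a, b = rel(formula[1], rho), rel(formula[2], rho)
    out = set()
    if formula[0] == 'and':
        if 1 in a and 1 in b: out.add(1)
        if 0 in a or 0 in b: out.add(0)
    if formula[0] == 'or':
        if 1 in a or 1 in b: out.add(1)
        if 0 in a and 0 in b: out.add(0)
    return out

rho = {'p': {1, 0}, 'q': set()}               # p is a glut, q is a gap
print(rel(('and', 'p', ('not', 'p')), rho))   # {0, 1}: p ∧ ¬p relates to 1
print(1 in rel('q', rho))                     # False: so p ∧ ¬p does not entail q

]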

 

 

8.3

Tableaux for FDE

 

The tableaux for First Degree Entailment will allow us to evaluate arguments for validity. The rules are modeled after those of classical logic, but are made more complex with the inclusion of a new addition to the formulations, namely, after each formula, we add a comma and then + for true ones and − for false ones, like:

p,+

and

¬p,−

We set up the tableaux to test for valid inference by first stacking the premises and lastly the negated conclusion. We then use the tableau rules to develop the branches until we can determine them as being either open or closed.

 

 Double Negation

Development, True (¬¬D,+)

¬¬A,+

A,+

 

 Double Negation

Development, False (¬¬D,−)

¬¬A,−

A,−

 

Conjunction

Development, True (∧D,+)

A ∧ B,+

A,+

B,+

 

Conjunction

Development, False (∧D,−)

A ∧ B,−

↙   ↘

A,−      B,−

 

 Negated Conjunction

Development, True (¬∧D,+)

¬(A ∧ B),+

¬A ∨ ¬B,+

 

 Negated Conjunction

Development, False (¬∧D,−)

¬(A ∧ B),−

¬A ∨ ¬B,−

 

 Disjunction

Development, True (∨D,+)

A ∨ B,+

↙   ↘

A,+      B,+

 

 Disjunction

Development, False (∨D,−)

A ∨ B,−

A,-

B,-

 

 Negated Disjunction

Development, True (¬∨D,+)

¬(A ∨ B),+

¬A ∧ ¬B,+

 

 Negated Disjunction

Development, False (¬∨D,−)

¬(A ∨ B),−

¬A ∧ ¬B,−

 

Branches close when they contain nodes of the form A,+ and A,−. Open branches indicate counter-models: for any p,+ on the branch, we set pρ1; for any ¬p,+ on the branch, we set pρ0; and we make no other assignments than that. This technique makes the premises true and the conclusion untrue whenever the inference is invalid. The tableaux are sound and complete.

 

 

8.4

FDE and Many-valued Logics

 

First Degree Entailment’s evaluating relation allows for four value situations, namely, a formula being valued as 1, as 0, as having a relation to 1 and to 0, and as having no such value relations. We can thus alternatively think of First Degree Entailment as having four singular values each as their own: 1 (for just true), 0 (for just false), b (for both), and n (for neither). The value assignments are then structured so that they match the outcomes for the First Degree Entailment rules. The truth tables for the connectives in four-valued FDE semantics are thus:

 

f¬  
1 0
b b
n n
0 1

 

f∧ 1 b n 0
1 1 b n 0
b b b 0 0
n n 0 n 0
0 0 0 0 0

 

f∨ 1 b n 0
1 1 1 1 1
b 1 b 1 b
n 1 1 n n
0 1 b n 0

 

We can make a short-hand of these valuations with the following diamond lattice (Hasse diagram):

 

1

↗               ↖

b                                 n

↖                   ↗

0

 

[Here is a version that I modified for my own purposes of visual demonstration, so it is not Priest’s, and it is probably flawed. The diamond ordering has 1 at the top, 0 at the bottom, and b and n in between, incomparable with one another:

1

↗               ↖

b                                 n

↖                   ↗

0

Negation toggles 0 and 1, and it maps n to itself and b to itself.

For conjunction, we take the greatest lower bound of the two conjunct values, that is to say, reading upward, we find the highest place we can start from in order to arrive at both of the two conjunct values (we can also start and arrive at the same place, if we begin at our destination).

For disjunction, we look for the least upper bound: reading downward, we seek the lowest place we can start from to arrive at both values.

 

] The designated values in FDE are 1 and b, so an inference is valid in FDE just when there is no interpretation that assigns all the premises 1 or b and assigns the conclusion 0 or n. We can place constraints on FDE in order to obtain other logics, like the three-valued logics K3 and LP and also Classical Logic. One such constraint is exclusion, which prevents there from being a formula that is both 1 and 0:

Exclusion: for no p, pρ1 and pρ0

FDE under the constraint of exclusion is K3. Now, to make K3 sound and complete, we use modified FDE tableaux rules, namely, we can (additionally) also close a branch when it contains A,+ and ¬A,+. Another constraint is exhaustion, which prevents there from being any formulas with no values:

Exhaustion: for all p, either pρ1 or pρ0

FDE under the exhaustion constraint is LP. To make LP sound and complete, we modify the FDE tableaux rules such that {1} a branch also closes if it has formulas of the form A,− and ¬A,−, and {2} we obtain counter-models from open branches by using the following rule: “if p,− is not on the branch (and so, in particular, if p,+ is), set pρ1; and if ¬p,− is not on the branch (and so, in particular, if ¬p,+ is), set pρ0” (149). And then, FDE under both the exhaustion and exclusion constraints is Classical Logic. Note that FDE is a proper sub-logic of K3 and LP, because every interpretation of K3 or LP is an interpretation in FDE.
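[Here is a small Python sketch of my own (not from the text) of the lattice reading of the four-valued tables: conjunction as greatest lower bound and disjunction as least upper bound on the diamond ordering, with negation swapping 1 and 0 while fixing b and n:

BELOW = {'0': {'0'}, 'b': {'0', 'b'}, 'n': {'0', 'n'}, '1': {'0', 'b', 'n', '1'}}
def leq(x, y): return x in BELOW[y]          # x is less than or equal to y in the diamond

def glb(x, y):   # conjunction: the greatest of the common lower bounds
    return max((z for z in BELOW if leq(z, x) and leq(z, y)),
               key=lambda z: len(BELOW[z]))
def lub(x, y):   # disjunction: the least of the common upper bounds
    return min((z for z in BELOW if leq(x, z) and leq(y, z)),
               key=lambda z: len(BELOW[z]))
NEG = {'1': '0', '0': '1', 'b': 'b', 'n': 'n'}

print(glb('b', 'n'), lub('b', 'n'))   # 0 1, matching the f∧ and f∨ tables above
print(glb('1', 'b'), lub('n', '0'))   # b n

]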

 

 

8.5

The Routley Star

 

FDE can be given an equivalent, two-valued, possible world semantics in which negation is an intensional operator, meaning that it is defined by means of related possible worlds. In this case, we use Routley’s star worlds. We have a star function, ∗, which maps a world to its “star” or “reverse” world (and back again to the first one, if applied yet another time; that return to the original world is what characterizes the function). So for any world w, the ∗ function gives us its companion star world w*. We evaluate conjunctions and disjunctions based on values in the given world. But what is notable in Routley star semantics is that a negated formula is valued true in a world w not on the basis of its unnegated form being false in that world w, but rather on the basis of its unnegated form being false in the star world w* (so a negated formula is 1 in w if its unnegated form is 0 in w*). Validity is defined as truth preservation in all worlds of all interpretations. For constructing tableaux in Routley star logic, we designate not just the truth-value for a formula but also the world in which that formula has that value. This is especially important for negation, where the derived formulas are found in the companion star world. Here is how the tableau rules work for Routley star logic [quoting Priest, except for the rules tables, where I add my own names and abbreviations, following David Agler]:

Nodes are now of the form A,+x or A,−x, where x is either i or i#, i being a natural number. (In fact, i will always be 0, but we set things up in a slightly more general way for reasons to do with later chapters.) Intuitively, i# represents the star world of i. Closure occurs if we have a pair of the form A,+x and A,−x. The initial list comprises a node B,+0 for every premise, B, and A,−0, where A is the conclusion. The tableau rules are as follows, where x is either i or i#, and whichever of these it is, x̄ is the other.

 

Conjunction

Development, True (∧D,+x)

A ∧ B,+x

A,+x

B,+x

 

Conjunction

Development, False (∧D,−x)

A ∧ B,−x

↙         ↘

A,−x              B,−x

 

 Disjunction

Development, True (∨D,+x)

A ∨ B,+x

↙               ↘

A,+x      B,+x

 

 Disjunction

Development, False (∨D,−x)

A ∨ B,-x

A,-x

B,-x

 

 Negation

Development, True (¬D,+x)

¬A,+x

A,−x̄

 

 Negation

Development, False (¬D,−x)

¬A,-x

A,+x̄

(152, Note, names and abbreviations are my own and are not in the text.)

 

We test for validity first [as noted above] by setting every premise to true in the non-star world and the conclusion to false in the non-star world. We then apply all the rules possible, and if all the branches are closed [recall from above that closure occurs if we have a pair of the form A,+x and A,−x] then it is valid, and invalid otherwise [so it is invalid if any branch is open]. We then can make counter-models using completed open branches. On the basis of the world indicators in the branches, we assign to the formulas the values indicated by the true (+) and false (−) signs for the respective world (that is to say, “if p,+x occurs on the branch, vwx(p) = 1, and if p,−x occurs on the branch, vwx(p) = 0”). The equivalence between Routley star semantics and FDE becomes apparent when we make the following translation: vw(p) = 1 iff pρ1; vw*(p) = 0 iff pρ0.
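[Here is a tiny Python sketch of my own (not from the text) of the Routley ∗ evaluation: two worlds that are each other's stars, with negation looked up at the star world. With p true at w but not at w*, both p and ¬p come out true at w, while q does not, so explosion fails:

STAR = {'w': 'w*', 'w*': 'w'}
V = {('w', 'p'): 1, ('w*', 'p'): 0, ('w', 'q'): 0, ('w*', 'q'): 0}

def true_at(world, formula):
    if isinstance(formula, str):
        return V[(world, formula)] == 1
    if formula[0] == 'not':                      # ¬A true at w iff A not true at w*
        return not true_at(STAR[world], formula[1])
    if formula[0] == 'and':
        return true_at(world, formula[1]) and true_at(world, formula[2])
    if formula[0] == 'or':
        return true_at(world, formula[1]) or true_at(world, formula[2])

print(true_at('w', ('and', 'p', ('not', 'p'))))   # True: p ∧ ¬p holds at w ...
print(true_at('w', 'q'))                          # False: ... while q does not

]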

 

 

8.6

Paraconsistency and the Disjunctive Syllogism

 

(8.6.1) On account of truth-value gluts, p ∧ ¬p ⊨ q is not valid in FDE, and thus FDE does not suffer from explosion (which happens when contradictions entail any arbitrary formula and thus a contradiction entails everything). (8.6.2) Both FDE and LP are paraconsistent logics, because in them it is invalid to infer any arbitrary formula from a contradiction. (8.6.3) Disjunctive syllogism (p, ¬p ∨ q ⊨ q) fails in FDE (set p to b and q to 0; the premises are then designated, but the conclusion is not), and it fails in LP (set p to i and q to 0; again the premises are designated, but the conclusion is not). (8.6.4) Arguments for the material and strict conditional that use disjunctive syllogism are thus faulty on account of its invalidity (in FDE and LP). (8.6.5) Because disjunctive syllogism fails for the material conditional in FDE, so too does modus ponens fail for it, given their equivalence. This suggests that the material conditional does not adequately represent the real conditional. (8.6.6) Those who argue that disjunctive syllogism is intuitively valid can do so only by assuming that there are no truth-value gluts. They think that one disjunct’s being false (in a true disjunction) necessitates that the other disjunct be true. But we can also have the intuition that certain formulas should be both true and false. Suppose one of the disjuncts is ¬p, and suppose that p is both true and false. That does not necessitate that ¬p be just false; for it would also be both true and false. In other words: “The truth of p does not rule out the truth of ¬p: both may hold” (154). Since ¬p is then at least true, the truth of the disjunction does not require that the other disjunct be true, and so we cannot infer that the other disjunct is true. For only one disjunct needs to be at least true. So if we start with the intuition that there can be truth-value gluts, then disjunctive syllogism is intuitively invalid. (8.6.7) A more convincing defense of the disjunctive syllogism is that we rely on it for reasoning well. Often we know that one of two things is true; when one proves false, we conclude it must be the other one. (8.6.8) Even though disjunctive syllogism is invalid, it still functions quite well for normal everyday reasoning. It only fails when there is a truth-value glut. Otherwise, our daily life presents us normally with consistencies, so it will still deliver correct inferences usually. We just need to be careful to distinguish those cases with gluts and remember not to use it then. (8.6.9) There is precedent for this sort of discrimination of situations for appropriate inference use in mathematics, so we should not feel too uncomfortable with it in cases of logical reasoning. For example, when dealing with finite sets, if one set is a proper subset of another, we can infer that it is smaller. But for infinite sets, we cannot draw that inference. For example, the set of even numbers is a proper subset of the set of natural numbers, but both sets have the same size. (8.6.10) Since we are willing to accept inference discrimination in mathematics, we can surely accept it in logic, and so we can set aside the objection that we must reject truth-value gluts (or that we need the material conditional) simply because we need disjunctive syllogism to reason properly.
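[A quick Python check of my own (not in the text) of the counterexample in 8.6.3, using the four-valued FDE tables from 8.4 with designated values {1, b}:

NEG = {'1': '0', '0': '1', 'b': 'b', 'n': 'n'}
OR = {('b', '0'): 'b', ('0', 'b'): 'b'}      # only the table entries needed here
D = {'1', 'b'}

p, q = 'b', '0'
print(p in D)                    # True: the premise p is designated
print(OR[(NEG[p], q)] in D)      # True: the premise ¬p ∨ q takes value b, also designated
print(q in D)                    # False: the conclusion q is not, so disjunctive syllogism fails

]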






ch.9

Logics with Gaps, Gluts and Worlds

 

9.1

Introduction

 

(9.1.1) In this chapter we will examine ways that we can combine the techniques of both modal logic and many-valued logic, especially with logics that involve strict conditional world semantics and First Degree Entailment. (9.1.2) We will also further elaborate on the notion of non-normal worlds. (9.1.3) At the end of this chapter we will examine logics of constructible negation and connexive logics.

 

 

9.2

Adding →

 

(9.2.1) In order to introduce a well-functioning conditional into FDE, we could build a possible world semantics upon it. “To effect this, let us add a new binary connective, →, to the language of FDE to represent the conditional. By analogy with, a relational | interpretation for such a language is a pair ⟨W, ρ⟩, where W is a set of worlds, and for every w ∈ W, ρw is a relation between propositional parameters and the values 1 and 0” (163-164). (9.2.2) We will use the symbol → for the conditional operator in our possible worlds FDE semantics. We still use the ρ relation to assign truth-values. But we also will specify the worlds in which that value holds. (9.2.3) The evaluation rules for ∧, ∨ and ¬ are just like those for FDE, only now with worlds specified.

A ∧ Bρw1 iff Aρw1 and Bρw1

A ∧ Bρw0 iff Aρw0 or Bρw0

(164)

A ∨ Bρw1 iff Aρw1 or Bρw1

A ∨ Bρw0 iff Aρw0 and Bρw0

¬Aρw1 iff Aρw0

¬Aρw0 iff Aρw1

(not in the text)

(9.2.4) In our possible worlds FDE, a conditional is true if in all worlds, whenever the antecedent is true, so is the consequent. And it is false if there is at least one world where the antecedent is true and the consequent false.

A → Bρw1 iff for all w′ ∈ W such that Aρw′1, Bρw′1

A → Bρw0 iff for some w′ ∈ W, Aρw′1 and Bρw′0

(9.2.5) In our possible worlds FDE, “semantic consequence is defined in terms of truth preservation at all worlds of all interpretations:

Σ ⊨ A iff for every interpretation, ⟨W, ρ⟩, and all w ∈ W: if Bρw1 for all B ∈ Σ, Aρw1

(164)

(9.2.6) “A natural name for this logic would be 4. We will call it, more simply, K4” (164).
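[Here is a minimal Python sketch of my own (not from Priest) of the K4 clauses in 9.2: each world carries an FDE-style relation, and A → B is true at a world when every world that makes A true makes B true, and false when some world makes A true and B false:

W = ['w0', 'w1']
RHO = {('w0', 'p'): {1}, ('w1', 'p'): {1, 0},     # p is a glut at w1
       ('w0', 'q'): {1}, ('w1', 'q'): {0}}

def rel(w, formula):
    """Subset of {1, 0} that the formula relates to at world w."""
    if isinstance(formula, str):
        return RHO[(w, formula)]
    if formula[0] == 'not':
        sub = rel(w, formula[1])
        return {1 for v in sub if v == 0} | {0 for v in sub if v == 1}
    if formula[0] == 'cond':
        A, B = formula[1], formula[2]
        out = set()
        if all(1 in rel(u, B) for u in W if 1 in rel(u, A)): out.add(1)
        if any(1 in rel(u, A) and 0 in rel(u, B) for u in W): out.add(0)
        return out

# p → q is false (and not true) at w0, because w1 makes p true and q false.
print(rel('w0', ('cond', 'p', 'q')))     # {0}

]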

 

 

9.3

Tableaux for K4

 

(9.3.1) We will formulate the tableau procedures for K4 by modifying those for FDE. (9.3.2) Nodes in our possible worlds FDE tableaux take the “form A,+i or A,−i, where i is a natural number.” To test for validity, we compose our initial list by formulating our premise nodes as “B,+0” and our conclusion as “A,−0”.  “A branch closes if it contains a pair of the form A,+i and A,−i” (164). (9.3.3) The tableau rules for the extensional connectives (∧, ∨ and ¬) in K4 are the same as for FDE except “i is carried through each rule.”[Below I include the conditional rules from the next section. Note that the rules for double negation and disjunction are not in the text and are probably mistaken.]

 Double Negation

Development, True (¬¬D,+)

¬¬A,+i

A,+i

 

 Double Negation

Development, False (¬¬D,−)

¬¬A,−i

A,−i

 

Conjunction

Development, True (∧D,+)

A ∧ B,+i

A,+i

B,+i

 

Conjunction

Development, False (∧D,−)

A ∧ B,−i

↙   ↘

A,−i      B,−i

 

 Negated Conjunction

Development, True (¬∧D,+)

¬(A ∧ B),+i

¬A ∨ ¬B,+i

 

 Negated Conjunction

Development, False (¬∧D,−)

¬(A ∧ B),−i

¬A ∨ ¬B,−i

 

 Disjunction

Development, True (∨D,+)

A ∨ B,+i

↙   ↘

A,+i      B,+i

 

 Disjunction

Development, False (∨D,−)

A ∨ B,−i

A,-i

B,-i

 

 Negated Disjunction

Development, True (¬∨D,+)

¬(A ∨ B),+i

¬A ∧ ¬B,+i

 

 Negated Disjunction

Development, False (¬∨D,−)

¬(A ∨ B),−i

¬A ∧ ¬B,−i

 

 Conditional

Development, True (→D,+)

A → B,+i

↙   ↘

A,-j      B,+j

.

j is every number that occurs on the branch

 

 Conditional

Development, False (→D,−)

A → B,−i

A,+j

B,-j

.

j is a new number

 

 Negated Conditional

Development, True (¬→D,+)

¬(A → B),+i

A,+j

¬B,+j

.

j is a new number

 

 Negated Conditional

Development, False (¬→D,−)

¬(A → B),−i

↙     ↘

A,-j     ¬B,-j

.

j is every number that occurs on the branch

(165, titles for the rules are my own additions)

 

(9.3.4) To the above rules we add those for the conditional [see the rules just above, where they were moved to.] (9.3.5) Priest then gives a tableau example showing a valid inference. (9.3.6) Priest next gives an example of an inference that is not valid. (9.3.7) We make countermodels from open branches in the following way: “There is a world wi for each i on the branch; for propositional parameters, p, if p,+i occurs on the branch, set pρwi1; if ¬p,+i occurs on the branch, set pρwi0. ρ relates no parameter to anything else” (166). (9.3.8) (It can be proven that the possible worlds FDE tableaux are sound and complete with respect to the semantics.)

 

 

9.4

Non-normal Worlds Again

 

(9.4.1) On account of the potential for truth-value gaps and gluts, the conditional in our possible worlds First Degree Entailment system K4 does not suffer from the following paradoxes of the strict conditional: ⊨ p → (q ∨ ¬q), ⊨ (p ∧ ¬p) → q. (9.4.2) In K4, if ⊨ A then ⊨ B → A. That means ⊨ p → (q → q) is valid, because ⊨ q → q is valid. (9.4.3) Even though ⊨ p → (q → q), which contains the law of identity, is valid, we can think of a paradoxical instance of this that shows how the law of identity can fail: “if every instance of the law of identity failed, then, if cows were black, cows would be black. If every instance of the law failed, then it would precisely not be the case that if cows were black, they would be black” (167). (9.4.4) As we noted, the conditional should be able to express things that go against the laws of logic, like the law of identity. We should be able to formulate sentences in which the antecedent supposes some law of logic not to hold, and then the consequent would express what sorts of things would follow from that. Non-normal worlds are ones where the normal laws of logic may fail; so we should implement non-normal worlds: “we need to countenance worlds where the laws of logic are different, and so where laws of logic, like the law of identity, may fail. This is exactly what non-normal worlds are” (167). (9.4.5) We thus need to consider non-normal worlds where the laws of logic fail and, given how the conditionals express those laws, where the conditional takes on values different than it would in normal worlds (K4). (9.4.6) At a non-normal world, A → B might be able to take on any sort of value, because the laws of logic may change in that world. (9.4.7) We “take an interpretation to be a structure ⟨W, N, ρ⟩, where W is a set of worlds, N ⊆ W is the set of normal worlds (so that W − N is the set of non-normal worlds), and ρ does two things. For every w, ρw is a relation between propositional parameters and the truth values 1 and 0, in the usual way. But also, for every non-normal world, w, ρw is a relation between formulas of the form A → B and truth values” (167). (9.4.8) The truth conditions for connectives in our non-normal worlds K4 are the same as for K4, except in non-normal worlds, the conditional is assigned its value not recursively but in advance by the ρ relation. Here are the truth conditions for normal worlds:

A ∧ Bρw1 iff Aρw1 and Bρw1

A ∧ Bρw0 iff Aρw0 or Bρw0

(p.164, section 9.2.3)

A ∨ Bρw1 iff Aρw1 or Bρw1

A ∨ Bρw0 iff Aρw0 and Bρw0

¬Aρw1 iff Aρw0

¬Aρw0 iff Aρw1

(not in the text)

A → Bρw1 iff for all w′ ∈ W such that Aρw′1, Bρw′1

A → Bρw0 iff for some w′ ∈ W, Aρw′1 and Bρw′0

(p.164, section 9.2.4)

(9.4.9) Our non-normal worlds FDE system will be called N4, and it will define validity in the same way as for K4, namely, as truth preservation at all normal worlds of all interpretations.

 

                                 

9.5

Tableaux for N4

(9.5.1) The tableau rules for N4 are the same as for K4, except the rules for → will apply only at world 0.

 

 Double Negation

Development, True (¬¬D,+)

¬¬A,+i

A,+i

 

 Double Negation

Development, False (¬¬D,−)

¬¬A,−i

A,−i

 

Conjunction

Development, True (∧D,+)

A ∧ B,+i

A,+i

B,+i

 

Conjunction

Development, False (∧D,−)

A ∧ B,−i

↙   ↘

A,−i      B,−i

 

 Negated Conjunction

Development, True (¬∧D,+)

¬(A ∧ B),+i

¬A ∨ ¬B,+i

 

 Negated Conjunction

Development, False (¬∧D,−)

¬(A ∧ B),−i

¬A ∨ ¬B,−i

 

 Disjunction

Development, True (∨D,+)

A ∨ B,+i

↙   ↘

A,+i      B,+i

 

 Disjunction

Development, False (∨D,−)

A ∨ B,−i

A,-i

B,-i

 

 Negated Disjunction

Development, True (¬∨D,+)

¬(A ∨ B),+i

¬A ∧ ¬B,+i

 

 Negated Disjunction

Development, False (¬∨D,−)

¬(A ∨ B),−i

¬A ∧ ¬B,−i

 

 Conditional

Development, True (→D,+)

A → B,+i

↙   ↘

A,-j      B,+j

.

j is every number that occurs on the branch (and this rule applies only to world 0)

 

 Conditional

Development, False (→D,−)

A → B,−i

A,+j

B,-j

.

j is a new number. (Here i will always be 0 and j will be 1)

 

 Negated Conditional

Development, True (¬→D,+)

¬(A → B),+i

A,+j

¬B,+j

.

j is a new number. (Here i will always be 0 and j will be 1)

 

 Negated Conditional

Development, False (¬→D,−)

¬(A → B),−i

↙     ↘

A,-j     ¬B,-j

.

j is every number that occurs on the branch (and this rule applies only to world 0. So i will always be 0)

(165, titles for the rules are my own additions. Note that the rules for double negation and disjunction are not in the text and are probably mistaken. Also, I am guessing about the conditionals, too.)

 

(9.5.2) Priest then gives an example tableau for a formula that is valid in K4 but not in N4. (9.5.3) We construct counter-models from open branches in the following way. There is a world wi for each i on the branch. For all propositional parameters, p, in every world (normal or not), and for conditionals, A → B, at non-normal worlds only: if p,+i or A → B,+i occurs on the branch, set pρwi1 or A → Bρwi1; if ¬p,+i or ¬(A → B),+i occurs on the branch, set pρwi0 or A → Bρwi0. There are no other facts about ρ. (9.5.4) “N4 is a sub-logic of K4, but not the other way around,” because all valid formulas of N4 are valid in K4, but not all valid formulas of K4 are valid in N4. (9.5.5) “The tableaux for N4 are sound and complete with respect to the semantics” (169).

 

 

9.6

Star Again

 

(9.6.1) We can apply the N4 constructions to Routley Star ∗ semantics. (9.6.2) To the Routley semantics that we have seen before, we now add the rule for the conditional →, which gives us K∗. Here is the formalization:

Formally, a Routley interpretation is a structure ⟨W, ∗, v⟩, where W is a set of worlds, ∗ is a function from worlds to worlds such that w∗∗ = w, and v assigns each propositional parameter either the value 1 or the value 0 at each world. v is extended to an assignment of truth values for all formulas by the conditions:

vw(A ∧ B) = 1 if vw(A) = vw(B) = 1; otherwise it is 0.

vw(A ∨ B) = 1 if vw(A) = 1 or vw(B) = 1; otherwise it is 0.

vw(¬A) = 1 if vw*(A) = 0; otherwise it is 0.

| Note that vw*(¬A) = 1 iff vw**(A) = 0 iff vw(A) = 0. In other words, given a pair of worlds, w and w*, each of A and ¬A is true exactly once. Validity is defined in terms of truth preservation over all worlds of all interpretations.

(p.151-152, section 8.5.3)

Let ⟨W, ∗, v⟩ be any Routley interpretation (8.5.3). This becomes an interpretation for the augmented language when we add the following truth condition for →:

vw(A → B) = 1 iff for all w′ ∈ W such that vw′(A) = 1, vw′(B) = 1

 

Call the logic that this generates, K∗.

(169)

(9.6.3) Priest then supplies the tableau rules for K∗.

 

Conjunction

Development, True (∧D,+x)

A ∧ B,+x

A,+x

B,+x

 

Conjunction

Development, False (∧D,−x)

A ∧ B,−x

↙      ↘

A,−x       B,−x

 

 Disjunction

Development, True (∨D,+x)

A ∨ B,+x

↙      ↘

A,+x        B,+x

 

 Disjunction

Development, False (∨D,−x)

A ∨ B,-x

A,-x

B,-x

 

 Negation

Development, True (¬D,+x)

¬A,+x

A,−x̄

 

 Negation

Development, False (¬D,−x)

¬A,-x

A,+x̄

 

 Conditional

Development, True (→D,+x)

A → B,+x

↙      ↘

A,-y      B,+y

.

where x is either i or i#; y is anything of the form j or j#, where one or other (or both) of these is on the branch

 

 Conditional

Development, False (→D,−x)

A → B,-x

A,+j

B,-j

.

where x is either i or i#; y is anything of the form j or j#, where one or other (or both) of these is on the branch. j must be new.

(last two are based on p.169, and those above from p.152, section 8.5.4, with names and bottom text added, possibly mistakenly; please consult the original text)

 

(9.6.4) Priest then gives an example tableau of an invalid inference: that p ∧ ¬q ⊬ ¬(p → q). (9.6.5) We make counter-models using completed open branches. On the basis of the world indicators in the branches, we assign to the formulas the values indicated by the true (+) and false (−) signs for the respective world. When there is negation, however, we need to use values in the star-companion world. “W is the set of worlds which contains wx for every x and x̄ that occurs on the branch. For all i, w*i = wi# and w*i# = wi. v is such that if p,+x occurs on the branch, vwx(p) = 1, and if p,−x occurs on the branch, vwx(p) = 0” (170). (9.6.6) In K∗, we still have the problematic valid formula: ⊨ p → (q → q). We can remedy this by adding non-normal worlds to get N∗. “An interpretation is a structure ⟨W, N, ∗, v⟩, where N ⊆ W; for all w ∈ W, w∗∗ = w; v assigns a truth value to every parameter at every world, and to every formula of the form A → B at every non-normal world. The truth conditions are exactly the same as for K∗, except that the truth conditions for → apply only at normal worlds; at non-normal worlds, they are already given by v. Validity is defined in terms of truth preservation at normal worlds. Call this logic N∗” (170). (9.6.7) We make our tableaux for N∗ the same way as for K∗, only now the rules for the conditional → apply only to world 0. We generate counter-models the same way too. (9.6.8) The tableaux for K∗ and N∗ are sound and complete. (9.6.9) K4 and N4 are not equivalent to K∗ and N∗. For example, K∗ and N∗ validate contraposition: p → q ⊨ ¬q → ¬p, but K4 and N4 do not. (9.6.10) Additionally, K4 and N4 verify p ∧ ¬q ⊨ ¬(p → q), but K∗ and N∗ do not.

                                   

9.7

Impossible Worlds and Relevant Logic

 

(9.7.1) We will now discuss philosophical matters regarding K4, N4, K∗, and N∗. (9.7.2) We will now call non-normal worlds “logically impossible worlds,” because they are worlds where the laws of logic are different. (9.7.3) Just as there is no problem in conceiving physically impossible worlds, there should likewise be no problem in conceiving logically impossible worlds. (9.7.4) We already seem to suppose such logically impossible worlds when we note how certain laws of logic fail in particular non-classical logics, as for example when we say: “if intuitionist logic were correct, the law of double negation would fail.” (9.7.5) Objections to logically impossible worlds do not work. For we cannot simply require that the laws of logic admit of no variation, when in fact such variation is precisely what we are successfully and fruitfully modelling. (9.7.6) In a logically impossible world it could still be that no normal law of logic is actually broken, just as in a physically impossible world normally impossible physical events can take place but, for contingent reasons, happen not to. (9.7.7) Logically impossible worlds can also in fact be ones where laws of logic are indeed broken. (9.7.8) Relevant propositional logics are ones where whenever “A → B is logically valid, A and B have a propositional parameter in common” (172). (9.7.9) N4 is a relevant logic, on account of how conditionals are evaluated at normal worlds (they depend on the values at non-normal worlds) in combination with the arbitrariness of the value assignments to conditionals at non-normal worlds. (9.7.10) In a similar way, N∗ is also a relevant logic. (9.7.11) Relevant logics answer to our intuition that there should be a connection of relevance between the antecedent and consequent of a conditional, and this can be secured by requiring them to share parameters. (9.7.12) There is another, quite different, class of relevant logics, called filter logics, in which “a conditional is taken to be valid iff it is classically valid and satisfies some extra constraint, for example that antecedent and consequent share a parameter” (173). (9.7.13) In our systems here, however, relevance is not a condition added on top of classical validity. (9.7.14) If we wanted to keep this kind of system but reserve a real world where truth operates in a more conventional way, we can designate an actual world, @, that satisfies certain constraints; for example, we could add the exhaustion and exclusion constraints to eliminate truth-value gaps and gluts at the actual world @.

 

 

9.7a

Logics of Constructible Negation

 

(9.7a.1) We will now examine logics of constructible negation, which add an account of negation to the negationless part of intuitionistic logic (or positive intuitionistic logic). The important feature of these logics is that “unlike intuitionist logic, they treat truth and falsity even-handedly” (175). (9.7a.2) We will “consider interpretations of the form ⟨W, R, ρ⟩, where W is the usual set of worlds, R is a reflexive and transitive binary relation on W, and for every w ∈ W, and propositional parameter, p, ρw relates p to 1, 0, both or neither, subject to the heredity constraints: if pρw1 and wRw′, then pρw′1; if pρw0 and wRw′, then pρw′0” (175). (9.7a.3) Priest next provides the truth/falsity conditions for the connectives in our logic of constructible negation.

 

A ∧ Bρw1 iff Aρw1 and Bρw1

A ∧ Bρw0 iff Aρw0 or Bρw0

 

A ∨ Bρw1 iff Aρw1 or Bρw1

A ∨ Bρw0 iff Aρw0 and Bρw0

 

¬Aρw1 iff Aρw0

¬Aρw0 iff Aρw1

 

A ⊐ Bρw1 iff for all w′ such that wRw′, either it is not the case that Aρw′1 or Bρw′1

A ⊐ Bρw0 iff Aρw1 and Bρw0

(175)
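[As an illustration of these ρ-clauses, here is a small Python sketch of my own (not from Priest); the worlds, the relation R, and the parameter data are invented, and I represent ρ as a function returning a pair (relates to 1, relates to 0).]

# A toy interpretation for I4. R is reflexive and transitive; the parameter data respect heredity.
W = ['w0', 'w1']
R = {('w0', 'w0'), ('w0', 'w1'), ('w1', 'w1')}
data = {('p', 'w0'): (False, False), ('p', 'w1'): (True, True),    # p is a glut at w1
        ('q', 'w0'): (False, False), ('q', 'w1'): (False, False)}

def rho(A, w):
    op = A[0]
    if op == 'and':
        (t1, f1), (t2, f2) = rho(A[1], w), rho(A[2], w)
        return (t1 and t2, f1 or f2)
    if op == 'or':
        (t1, f1), (t2, f2) = rho(A[1], w), rho(A[2], w)
        return (t1 or t2, f1 and f2)
    if op == 'not':
        t, f = rho(A[1], w)
        return (f, t)
    if op == 'cond':                       # A ⊐ B
        succ = [x for x in W if (w, x) in R]
        t = all((not rho(A[1], x)[0]) or rho(A[2], x)[0] for x in succ)
        f = rho(A[1], w)[0] and rho(A[2], w)[1]
        return (t, f)
    return data[(op, w)]

p, q = ('p',), ('q',)
print(rho(('cond', ('and', p, ('not', p)), q), 'w0'))   # (False, False): (p and not-p) ⊐ q is untrue here
print(rho(('not', ('not', p)), 'w1') == rho(p, 'w1'))   # True: not-not-A and A always agree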

Validity is truth preservation in all worlds of all interpretations. (9.7a.4) Priest next gives the tableau rules for I4.

 

 Double Negation

Development, True (¬¬D,+)

¬¬A,+i

A,+i

 

 Double Negation

Development, False (¬¬D,−)

¬¬A,−i

A,−i

 

Conjunction

Development, True (∧D,+)

A ∧ B,+i

A,+i

B,+i

 

Conjunction

Development, False (∧D,−)

A ∧ B,−i

↙   ↘

A,−i      B,−i

 

 Negated Conjunction

Development, True (¬∧D,+)

¬(A ∧ B),+i

¬A ∨ ¬B,+i

 

 Negated Conjunction

Development, False (¬∧D,−)

¬(A ∧ B),−i

¬A ∨ ¬B,−i

 

 Disjunction

Development, True (∨D,+)

A ∨ B,+i

↙   ↘

A,+i      B,+i

 

 Disjunction

Development, False (∨D,−)

A ∨ B,-i

A,-i

B,-i

 

 Negated Disjunction

Development, True (¬∨D, +)

¬(A ∨ B),+i

¬A ∧ ¬B,+i

 

 Negated Disjunction

Development, False (¬∨D, -)

¬(A ∨ B),-i

¬A ∧ ¬B,-i

(p.165, section 9.3.3; titles for the rules are my own additions)

 

 Conditional

Development, True (⊐D, +)

A ⊐ B,+i

irj

↙     ↘

A,-j       B,+j

.

apply for every j such that irj is on the branch

 

Conditional

Development, False (⊐D,-)

A ⊐ B,-i

irj

A,+j

B,-j

.

j is new to the branch

(176, titles for the rules are my own additions)

 

Negated Conditional

Development, True (¬⊐D,+)

¬(A ⊐ B),+i

A,+i

¬B,+i

 

Negated Conditional

Development, False (¬⊐D,-)

¬(A ⊐ B),-i

↙     ↘

A,-i      ¬B,-i

 

ρ, Reflexivity (ρrD)

ρ

.

iri

 

τ, Transitivity (τrD)

τ

irj

jrk

irk

(see p.38, section 3.3.2; with my naming additions)

 

Heredity, Unnegated, True (hD)

p,+i

irj

p,+j

.

p is any propositional parameter

 

Heredity, Negated, True (¬hD)

¬p,+i

irj

¬p,+j

.

p is any propositional parameter

(176, with my naming additions)

 

(9.7a.5) Priest then does some example tableaux to show that ⊢I4 ¬¬A ⊐ A, and ⊬I4 (p ∧ ¬p) ⊐ q. (9.7a.6) Counter-models are formed in the following way. “There is a world wi for each i on the branch; for propositional parameters, p, if p,+i occurs on the branch, set pρwi1; if ¬p,+i occurs on the branch, set pρwi0. ρ relates no parameter to anything else” (p.166). “wiRwj iff irj occurs on the branch” (p.27). (9.7a.7) We obtain I3 by adding the Exclusion Constraint to I4: “for no p and w, pρw1 and pρw0.” (This makes it similar to K3.) Our tableaux for I3 have the additional branch-closure rule that a branch closes when both a propositional parameter and its negation are true at some same world. (9.7a.8) Formulas lacking negation that are valid in I are also valid in I4 and I3. (9.7a.9) But negation behaves differently in I than it does in I4 and I3. (9.7a.10) By changing I4’s conditional rule for falsity and the tableau rule for the negated conditional, we get a logic called W.

A ⊐ Bρw0 iff A ⊐ ¬Bρw1 (i.e., for all w′ such that wRw′, either it is not the case that Aρw′1 or Bρw′0).

 

Negated Conditional

Development (¬⊐D,±)

¬(A ⊐ B),±i

A ⊐ ¬B,±i

(178, naming is my own)

 

(9.7a.11) W is a connexive logic, meaning that it validates two characteristic principles: Aristotle, ¬(A ⊐ ¬A), and Boethius, (A ⊐ B) ⊐ ¬(A ⊐ ¬B). (9.7a.12) Unlike all the other logics we deal with, connexive logics are not sub-logics of classical logic; for not everything that is valid in connexive logics is also valid in classical logic, Aristotle and Boethius in particular. (9.7a.13) Aristotle and Boethius have intuitive appeal, despite being heterodox principles of conditionality. (9.7a.14) Another feature that W has and that the other logics of this book lack is that its class of logical truths is inconsistent: (p ∧ ¬p) ⊐ ¬(p ∧ ¬p) is a logical truth, and so is its negation, which is an instance of Aristotle, ¬(A ⊐ ¬A). (9.7a.15) The tableaux for I4, I3, and W are sound and complete.

 

 

 

 

 

ch.10

Relevant Logics

 

10.1

Introduction

 

(10.1.1) In this chapter Priest will introduce relevant logics, which “are obtained by employing a ternary relation to formulate the truth conditions of →” and which can be made stronger by adding constraints to that ternary relation. (10.1.2) We will also combine relevant semantics with the semantics of conditional logics “to give an account of ceteris paribus enthymemes” (188).

 

 

10.2

The Logic B

 

(10.2.1) We can strengthen relevant logics like N4 and N∗ to accommodate certain intuitively correct principles regarding the conditional by incorporating non-normal worlds and a ternary accessibility relation on worlds, Rxyz. (10.2.2) The intuitive sense of the ternary relation Rxyz is: for all A and B, if A → B is true at x, and A is true at y, then B is true at z. (10.2.3) We will focus on the ternary relation ∗ semantics, as these are the ones that have been studied historically. (10.2.4) A ternary ∗ interpretation is a structure, ⟨W, N, R, ∗, v⟩, where “W is a set of worlds, N ⊆ W is the set of normal worlds (so that W − N is the set of non-normal worlds)”, “for all w ∈ W, w∗∗ = w; v assigns a truth value to every parameter at every world, and to every formula of the form A → B at every non-normal world,” and R is any ternary relation on worlds. “(So, technically, R ⊆ W × W × W.)” (167; 170; 189). (10.2.5) Priest next gives the truth conditions for the connectives.

vw(A ∧ B) = 1 if vw(A) = vw(B) = 1, otherwise it is 0.

vw(A ∨ B) = 1 if vw(A) = 1 or vw(B) = 1, otherwise it is 0.

vw(¬A) = 1 if vw*(A) = 0, otherwise it is 0.

(p.151, section 8.5.3; p.169, section 9.6.6; see 9.6.2)

at normal worlds, the truth conditions for → are:

vw(A → B) = 1 iff for all x ∈ W such that vx(A) = 1, vx(B) = 1

The exception is that if w is a non-normal world:

vw(A → B) = 1 iff for all x, y ∈ W such that Rwxy, if vx(A) = 1, then vy(B) = 1

(189)
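[Here is a small Python sketch of my own (not Priest's) of these truth conditions; the worlds, the ternary relation, and the valuation are invented. It shows how a conditional that holds at the normal world can still fail at a non-normal world, and thereby block an irrelevant conditional at the normal world.]

# A toy ternary-relation * interpretation in the style of B: n0 is normal, i0 is not.
W = ['n0', 'i0']
N = {'n0'}
star = {'n0': 'n0', 'i0': 'i0'}            # here every world is its own star world
R3 = {('i0', 'n0', 'i0')}                  # the ternary relation, used only at non-normal worlds
v = {('p', 'n0'): 1, ('p', 'i0'): 1, ('q', 'n0'): 1, ('q', 'i0'): 0}

def val(A, w):
    op = A[0]
    if op == 'not':
        return 1 if val(A[1], star[w]) == 0 else 0
    if op == 'and':
        return min(val(A[1], w), val(A[2], w))
    if op == 'or':
        return max(val(A[1], w), val(A[2], w))
    if op == 'cond':
        if w in N:                         # normal worlds: quantify over all worlds
            return 1 if all(val(A[2], x) == 1 for x in W if val(A[1], x) == 1) else 0
        pairs = [(x, y) for (u, x, y) in R3 if u == w]
        return 1 if all(val(A[2], y) == 1 for (x, y) in pairs if val(A[1], x) == 1) else 0
    return v[(op, w)]

p, q = ('p',), ('q',)
print(val(('cond', q, q), 'n0'), val(('cond', q, q), 'i0'))   # 1 0: q -> q fails at the non-normal world
print(val(('cond', p, ('cond', q, q)), 'n0'))                 # 0: so p -> (q -> q) is not true at n0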

(10.2.6) Validity is truth preservation over all normal worlds. (10.2.7) This logic is named B; it is a sub-logic of K∗, while N∗ is a sub-logic of B. (10.2.8) The normality condition is that, for normal worlds w, Rwxy iff x = y. By building this condition into the conditional rule, we can use a single clause for all worlds, which at normal worlds reduces to: vw(A → B) = 1 iff for all x ∈ W such that vx(A) = 1, vx(B) = 1. (10.2.9) Finally Priest notes that “the normality condition falls apart into two halves. From left to right: if Rwxy then x = y and from right to left, since x = x: Rwxx” (190).

 

 

10.6

The Ternary Relation

 

(10.6.1) We turn now to philosophical issues regarding the meaning of the ternary relation and its use in giving the truth conditions for the conditional. (10.6.2) One possible interpretation of the ternary relation is that we “read Rxyz as meaning that z contains all the information obtainable by pooling the information x and y. This makes sense of the truth conditions of →” (207). (10.6.3) This understanding of the ternary relation in terms of information systems x and y being pooled into z leads, however, to the validation of the irrelevant B → (A → A). (10.6.4) Another interpretation of the ternary relation is that worlds are conduits of information and A → B records an information flow, like how “a fossilised footprint allows information to flow from the situation in which it was made, to the situation in which it is found. Rxyz is now interpreted as saying that the information in y is carried to z by x” (207). (So we might understand the Rxyz ternary relation as meaning something like: for all pieces of information A and B, if there is a flow of information A → B at conduit x (which is a conduit of information from y to z), and A is true at y, then B is true at z.) (10.6.5) But this metaphor of information flow is not entirely transparent, and it may even lead to irrelevant inferences. For example, “if a situation carries any information at all, it would appear to carry the information that there is some source from which information is coming. Call this statement S. If this is the case, then the inference from A → B to A → S would appear to be valid. But this would seem to give a violation of relevance, since A itself may have nothing to do with S” (207-208). (10.6.6) As things stand, ternary relation semantics and this notion of information flow are too new to do much more than provide “a model-theoretic device for establishing various formal facts about various relevant logics,” when they should also “justify the fact that some inferences concerning conditionals are valid and some are not.” For this, we need “some acceptable account of the connection between the meaning of the relation and the truth conditions of conditionals.”




ch.11

Fuzzy Logics

 

11.1

Introduction

 

(11.1.1) In this chapter we examine fuzzy logic, which assigns to sentences truth values of any real number between 0 and 1. (11.1.2) We will also discuss vagueness, which is one of the main philosophical motivations for fuzzy logic, and we will discuss fuzzy logic’s relation to relevant logics. (11.1.3) We also examine fuzzy conditionals, including how modus ponens fails in fuzzy logic.

 

 

11.2

Sorites Paradoxes

 

(11.2.1) Priest first illustrates the sorites paradox. A person begins at age five and is thus a child. One second after that the person is still a child. Thus also one second after that new second the person is still a child. No additional second will cause the child to definitively cease being a child and start being an adult. However, after 30 years, we know that the person is now an adult. (11.2.2) The sorites paradox results from vague predicates like “is a child,” where “very small changes to an object (in this case, a person) seem to have no effect on the applicability of the predicate” (221). (11.2.3) Many other vague predicates, like “is tall,” “is drunk,” “is red,” “is a heap,” and even “is dead,” can all be used to construct sorites paradoxes. (11.2.4) We can structure the sorites paradox as a chain of modus ponens inferences where we say that something is in a certain state at a certain time, and next that if it is so at that time then it is so one second later, and we repeat that indefinitely, never arriving at the state we know it will change into.

 

 

11.3

. . . and Responses to Them

 

(11.3.1) We will now consider responses to the sorites paradox, where for

M0, M1, . . . , Mk

M0 is definitely true but Mk is definitely false. We will try to understand what is going on, logically speaking, between M0 and Mk. (11.3.2) If we simply think that every sentence is either simply true or simply false, then we can break the paradoxical chain at some point, saying for example that the person is a child at this moment and an adult at the next (“there must be a unique i such that Mi is true, and Mi+1 is false. In this case, the conditional Mi → Mi+1 is false, and the sorites argument is broken” (222)). But, while that solves the paradox, it goes against our intuition that the change is continuous, and that there can therefore be no such discrete leap from one instant to the next (and thus no discrete jump from truth to falsity). (11.3.3) Some still think that there are such discrete leaps of truth value in continuous changes, and that the reason this strikes us as counterintuitive is simply that we lack the means to know where exactly the change takes place. (11.3.4) Those arguing the above claim – that there is a discrete truth-value break but we cannot know it – use the following reasoning. We can only know true things, and we can only make judgements from evidence. The discrete truth-value shift in the actual change will make Mi true but Mi+1 false. However, the evidential basis will be the same for both. This means that when we make the judgment Mi+1 (on the basis of the misleading evidence that is the same as at the prior moment), we are making a false judgment, and so we can never know when the shift happens. (11.3.5) The main problem with this argument is that the real problem with the paradox is not that we cannot know where the change happens but that there could even be such a sharp cut-off point in a continuous change. (11.3.6) Another proposal is that cases of vagueness require that we reject a bivalent dichotomy between simple truth and falsity, so that for sorites changes there would be a middle stretch where the sentences are {1} neither true nor false, or {2} both true and false. (11.3.7) One three-valued solution is to use K3 (perhaps supplemented with supervaluation). “In this case, there is some i, such that Mi is true and Mi+1 is neither true nor false. Again, Mi → Mi+1 is not true, and so the sorites argument fails” (223). (11.3.8) But three-valued solutions suffer from the same counter-intuitiveness: it is hard to accept that there is a discrete boundary between truth and the middle value. (11.3.9) So since the changes are continuous, we might want to use a fuzzy logic where the truth-values come in continuous degrees too. (11.3.10) But even fuzzy logic has this same problem, because somewhere there must be a change from completely true to less than completely true.

 

 

11.4

The Continuum-valued Logic Ł

 

(11.4.1) One way to construct a fuzzy logic is as a many-valued logic with a continuum of values from 0 (completely false) to 1 (completely true), including all real numbers in between, such that 0.5 is half true, and so on. (11.4.1a) To formulate the semantics for the connectives, we will for now use the oldest and most philosophically interesting way of doing so. (11.4.2) Priest next gives the semantics for the connectives in our continuum-valued logic.

f¬(x) = 1 − x

f∧(x, y) = Min(x, y)

f∨(x, y) = Max(x, y)

f→(x, y) is defined by the two cases below

where Min means ‘the minimum (lesser) of’; Max means ‘the maximum (greater) of’; and the cases for f→ are as follows:

if x ≤ y, then f→(x, y) = 1

if x > y, then f→(x, y) = 1 − (x − y) (= 1 − x + y)

| Note that we could say ‘x ≥ y’ instead of ‘x > y’ in the second clause, since if x = y, 1 − (x − y) = 1. Note, also, that we could define f→(x, y) equivalently as Min(1, 1 − x + y).

(225-226)
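[The following is a direct Python transcription of my own of these four functions, plus a toy sorites run; the step values are invented for illustration and are not Priest's example.]

def f_neg(x):                  # 1 − x
    return 1 - x

def f_and(x, y):               # Min
    return min(x, y)

def f_or(x, y):                # Max
    return max(x, y)

def f_cond(x, y):              # 1 if x ≤ y, otherwise 1 − (x − y); i.e. Min(1, 1 − x + y)
    return min(1, 1 - x + y)

# A sorites chain: each conditional 'Mi → Mi+1' has the very high value 0.999.
# If a conditional has value 0.999 (< 1), then v(Mi+1) is forced to be v(Mi) − 0.001,
# so after 500 steps a completely true starting claim leaves a final claim only half true.
value = 1.0                    # v(M0): 'the five-year-old is a child'
for _ in range(500):
    value = max(0.0, value - (1 - 0.999))
print(round(value, 3))         # 0.5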

(11.4.3) The formulations of the semantic evaluations for the connectives in fuzzy logic hold to the basic intuitions we have about how they should operate. (11.4.4) Priest next notes that the conditional function respects the ordering in the following sense: if x ≤ y, then f→(y, z) ≤ f→(x, z); and if x ≤ y, then f→(z, x) ≤ f→(z, y). (11.4.5) Our continuum-valued fuzzy logic “is a generalisation of both classical propositional logic, and Łukasiewicz’ 3-valued logic”; for, if we use only 1 and 0, we get the outcomes for classical semantics, and if we use just 0, 0.5 and 1 (with 0.5 understood as i), we get the outcomes for Ł3. (11.4.6) The designated values are context dependent, and so “any context will determine a number, ε, somewhere between 0 and 1, such that the things that are acceptable are exactly those things with truth value x, where x ≥ ε” (226). (11.4.7) Validity is defined as: “Σ ⊨ε A iff for all interpretations, v, if v(B) ≥ ε for all B ∈ Σ, then v(A) ≥ ε” (226). (In other words, an inference is valid under the following condition: whenever the premises are at least as high as the ((context-determined designated fuzzy)) value ε, then so too is the conclusion at least as high as ε.) (11.4.8) Our fuzzy logic is called Ł, and its context-independent definition of validity is: Σ ⊨ A iff for all ε, where 0 ≤ ε ≤ 1, Σ ⊨ε A. (11.4.9) A set of truth-values X can be listed in descending numerical order. Suppose X is an infinite set following a pattern like {0.41, 0.401, 0.4001, 0.40001, . . .}. Even though it has no least member, there is still a greatest number that is less than or equal to all the members, in this case 0.4. It is called the greatest lower bound of the set X, abbreviated as Glb(X). (11.4.10) A simpler characterization of validity would be: Σ ⊨ A iff for all v, Glb(v[Σ]) ≤ v(A). (11.4.11) Given the semantic evaluation for conjunction and the conditional, we can formulate validity in the following way: {B1, . . . , Bn} ⊨ A iff for all v, v((B1 ∧ . . . ∧ Bn) → A) = 1. “Thus (for a finite number of premises), validity amounts to the logical truth of the appropriate conditional when the set of designated values is just {1}, that is, the logical truth of the conditional in ⊨1. The logic with just 1 as a designated value is usually written as Ł, and it is called Łukasiewicz’ continuum-valued logic” (227).

 

 

 

 

ch.11a

Appendix: Many-valued Modal Logics

 

11a.1

Introduction

 

(11a.1.1) In this chapter we will examine many-valued modal logics. (11a.1.2) First we examine the general structure of a many-valued modal logic, illustrated with Łukasiewicz continuum-valued modal logic. (11a.1.3) We will also examine many-valued modal First Degree Entailment logics, including modal K3 and modal LP. (11a.1.4) We will end the chapter with a discussion of future contingents.

 

 

11a.2

General Structure

 

(11a.2.1) A “propositional many-valued logic is characterised by a structure ⟨V, D, {fc : c ∈ C}⟩, where V is the set of semantic values, D ⊆ V is the set of designated values, and for each connective, c, fc is the truth function it denotes. An interpretation, v, assigns values in V to propositional parameters; the values of all formulas can | then be computed using the fcs; and a valid inference is one that preserves designated values in every interpretation” (242-243). (11a.2.2) We will assume that the truth-values in V are ordered from lesser to greater (or equal: ≤) and that “every subset of the values has a greatest lower bound (Glb) and least upper bound (Lub) in the ordering” (242). (11a.2.3) Our many-valued modal logic adds to many-valued logic “the monadic operators, □ and ◊ in the usual way” (242). (11a.2.4) “An interpretation for a many-valued modal logic is a structure ⟨W, R, SL, v⟩, where W is a non-empty set of worlds, R is a binary accessibility relation on W, SL is a structure for a many-valued logic, L, and for each propositional parameter, p, and world, w, v assigns the parameter a value, vw(p), in V” (242). (11a.2.5) The value of a complex formula at a world is computed by applying the corresponding truth function to the values of its immediate subformulas at that world: “if c is an n-place connective vw(c(A1, . . . , An)) = fc(vw(A1), . . . , vw(An))” (242). (11a.2.6) The truth-conditions for the modal operators are:

vw(□A) = Glb{vw′(A) : wRw′}

vw(◊A) = Lub{vw′(A) : wRw′}

(11a.2.7) Validity is defined in the following way: “Σ ⊨ A iff for every interpretation, ⟨W, R, SL, v⟩, and for every w ∈ W, whenever vw(B) ∈ D for every B ∈ Σ, vw(A) ∈ D” (242). (11a.2.8) Our many-valued modal logic is called KL, and we can apply the accessibility relation constraints to it to derive stronger logics, like KLρ, KLσ, KLρτ, and so forth.
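[As a concrete instance of this general structure, here is a small Python sketch of my own (not from Priest): the many-valued logic L is taken to be Łukasiewicz's continuum-valued logic, so for a finite set of accessible worlds Glb and Lub are simply min and max. The worlds, accessibility relation, and valuation are invented.]

W = ['w0', 'w1', 'w2']
R = {('w0', 'w1'), ('w0', 'w2')}
v = {('p', 'w0'): 1.0, ('p', 'w1'): 0.7, ('p', 'w2'): 0.4}

def val(A, w):
    op = A[0]
    if op == 'not':  return 1 - val(A[1], w)
    if op == 'and':  return min(val(A[1], w), val(A[2], w))
    if op == 'or':   return max(val(A[1], w), val(A[2], w))
    if op == 'box':  # Glb of the values of A at all accessible worlds
        return min(val(A[1], x) for x in W if (w, x) in R)
    if op == 'dia':  # Lub of the values of A at all accessible worlds
        return max(val(A[1], x) for x in W if (w, x) in R)
    return v[(op, w)]

p = ('p',)
print(val(('box', p), 'w0'), val(('dia', p), 'w0'))   # 0.4 0.7
# If the designated values are D = {x : x ≥ 0.8}, then p is designated at w0 but □p is not.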

 

 

11a.4

Modal FDE

 

(11a.4.1) In our many-valued modal FDE logic, we have just the connectives ∧, ∨ and ¬, because A ⊃ B is defined as ¬A ∨ B. (11a.4.2) FDE can be formulated as a four-valued logic, and the connectives can be evaluated with the following diamond lattice (Hasse diagram):

 

1

↗               ↖

b                                 n

↖                   ↗

0

 

f∧ is the greatest lower bound; f∨ is the least upper bound; and f¬ maps 1 to 0, and vice versa, and each of b and n to itself. (11a.4.3) Our modal FDE logic is called KFDE. “If we ignore the value n in the non-modal case (that is, we insist that formulas take one of the values in {1, b, 0}) we get the logic LP. In the modal case, we get KLP. If we ignore the value b in the non-modal case, we get the logic K3. In the modal case, we get KK3” (245). (11a.4.4) The equivalences between the four-valued FDE truth-values and the four value-situations of the relational semantics are: v(A) = 1 iff Aρ1 and it is not the case that Aρ0; v(A) = b iff Aρ1 and Aρ0; v(A) = n iff it is not the case that Aρ1 and it is not the case that Aρ0; and v(A) = 0 iff it is not the case that Aρ1 and Aρ0. The truth/falsity conditions for the connectives in the relational semantics are: A ∧ Bρ1 iff Aρ1 and Bρ1; A ∧ Bρ0 iff Aρ0 or Bρ0; A ∨ Bρ1 iff Aρ1 or Bρ1; A ∨ Bρ0 iff Aρ0 and Bρ0; ¬Aρ1 iff Aρ0; and ¬Aρ0 iff Aρ1. “Validity is defined in terms of the preservation of relating to 1” (245). (11a.4.5) KFDE has the same relational semantics but with world designations and with the following rules for the necessity and possibility operators:

vw(A) = 1 iff Aρw1 and it is not the case that Aρw0

vw(A) = b iff Aρw1 and Aρw0

vw(A) = n iff it is not the case that Aρw1 and it is not the case that Aρw0

vw(A) = 0 iff it is not the case that Aρw1 and Aρw0

 

A ∧ Bρw1 iff Aρw1 and Bρw1

A ∧ Bρw0 iff Aρw0 or Bρw0

 

A ∨ Bρw1 iff Aρw1 or Bρw1

A ∨ Bρw0 iff Aρw0 and Bρw0

 

¬Aρw1 iff Aρw0

¬Aρw0 iff Aρw1

 

□Aρw1 iff for all w′ such that wRw′, Aρw′1

□Aρw0 iff for some w′ such that wRw′, Aρw′0

 

◊Aρw1 iff for some w′ such that wRw′, Aρw′1

◊Aρw0 iff for all w′ such that wRw′, Aρw′0

(based on, with quotation from, pp.245-246)
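[Here is a small Python sketch of my own (not Priest's) that implements the relational clauses above, representing the value of a formula at a world as the pair (relates to 1, relates to 0); the model is invented.]

# A toy K_FDE model: one world with a glut for p, one with a gap.
W = ['w0', 'w1']
R = {('w0', 'w1')}
data = {('p', 'w0'): (True, True),     # p is both true and false at w0
        ('p', 'w1'): (False, False)}   # p is neither true nor false at w1

def rho(A, w):
    op = A[0]
    if op == 'and':
        (t1, f1), (t2, f2) = rho(A[1], w), rho(A[2], w)
        return (t1 and t2, f1 or f2)
    if op == 'or':
        (t1, f1), (t2, f2) = rho(A[1], w), rho(A[2], w)
        return (t1 or t2, f1 and f2)
    if op == 'not':
        t, f = rho(A[1], w)
        return (f, t)
    if op == 'box':
        succ = [x for x in W if (w, x) in R]
        return (all(rho(A[1], x)[0] for x in succ), any(rho(A[1], x)[1] for x in succ))
    if op == 'dia':
        succ = [x for x in W if (w, x) in R]
        return (any(rho(A[1], x)[0] for x in succ), all(rho(A[1], x)[1] for x in succ))
    return data[(op, w)]

p = ('p',)
print(rho(('or', p, ('not', p)), 'w1'))   # (False, False): p ∨ ¬p can be a gap, so KFDE has no logical truths
print(rho(('box', p), 'w0'))              # (False, False): p is a gap at the only world w0 accesses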

(11a.4.6) Next Priest provides the argumentation for why the truth/falsity conditions are formulated the way they are. (11a.4.7) By adding a possible-worlds exhaustion constraint to KFDE we get KLP. (11a.4.8) By adding a possible-worlds exclusion constraint to KFDE we get KK3. (11a.4.9) By adding both the exhaustion and exclusion constraints to KFDE, we get the classical modal logic K. (11a.4.10) We can apply the world accessibility relation constraints (like ρ, σ, τ, etc.) to our specific many-valued modal logics to get such extensions as KFDEρ, KLPρτ, KK3σ, and so forth. (11a.4.11) There are no logical truths (tautologies) in KFDE and KK3, because to be a logical truth a formula must be true under all interpretations, and in KFDE and KK3 there are interpretations (for instance, one with a single world related to itself where every parameter is neither true nor false) in which no formula is true. (11a.4.12) The interpretations for the logics of the family we are considering are monotonic. (11a.4.13) A corollary of this is “that ⊨K A iff ⊨KLP A (and similarly for Kρ and KLPρ, etc.)” (247).

 


11a.7

Future Contingents Revisited

 

(11a.7.1) We may use many-valued modal logics to deal with the problem of future contingents, and Aristotle’s analysis of them lends itself to a many-valued solution. We begin with the intuition that there are contingent future events for which there are presently no facts that could make statements about them true or false (as, for example, “The first pope in the twenty-second century will be Chinese” (132)). We then suppose that right now a statement about a future contingent is true (or false). That would mean that it cannot be otherwise, and thus fatalism would hold if future contingents are presently either true or false. But that goes against our original intuition that nothing in the present makes such statements true or false, so Aristotle concludes that statements about future contingents can have neither the value true nor the value false. (11a.7.2) Priest next quotes the important Aristotle passage for this discussion of future contingents, from De Int. 18b10–16.

. . . if a thing is white now, it was true before to say that it would be white, so that of anything that has taken place, it was always true to say ‘it is’ or ‘it will be’. But if it was always true to say that a thing is or will be, it is not possible that it should not be or not come to be, and when a thing cannot not come to be, it is impossible that it should not come to be, and when it is impossible that it should not come to be, it must come to be. All then, that is about to be must of necessity take place. It results from this that nothing is uncertain or fortuitous, for if it were fortuitous it would not be necessary. 

(251-252, quoting De Int. 18b10–16. Translation from Vol. 1 of Ross (1928).)

(11a.7.3) This argument may be read in the following way. “Let q be any statement about a future contingent event. Let Tq be the statement that it is (or was) true that q. Then □(Tq ⊃ q). Hence Tq ⊃ □q. And since □q is not true, neither is Tq. A similar argument can be run for ¬q. So neither Tq nor T¬q holds. Read in this way, the reasoning contains a modal fallacy (passing from □(A ⊃ B) to (A ⊃ □B))” (252). (11a.7.4) The above reading is incorrect, because Aristotle holds that the past and present are unchangeable and thus necessary. So the inference from □(Tq ⊃ q) should be to □Tq ⊃ □q, which is valid. (11a.7.5) The above argument can be formulated without the conditional or the Tq formula. We have the statement about the future, q. “If q were true, this would be a present fact, and so fixed; that is, it would be necessarily true, that is: q ⊨ □q. Similarly, if it were false, it would be necessarily false: ¬q ⊨ □¬q. Since neither □q nor □¬q holds, neither q nor ¬q holds” (252). (11a.7.6) Aristotle does not allow exceptions to the Law of Non-Contradiction. So the sort of many-valued modal logic we use in application to his Future Contingents argument should validate it. Thus we should not use KFDE or KLP but rather KK3, in which there is the option for formulas to be neither true nor false, but not the option for contradictions. (11a.7.7) In our many-valued modal logic, we indicate futurity with the R accessibility relation: “Think of the accessibility statement wRw′ as meaning that w′ may be obtained from w by some number (possibly zero) of further things happening” (252). Given the nature of time, R is reflexive and transitive but not symmetrical. To capture Aristotle’s assumption that “once something is true/false, it stays so,” we will use a modified heredity constraint called the Persistence Constraint: “for every propositional parameter, p, and world, w:

If pρw1 and wRw′, pρw′1

If pρw0 and wRw′, pρw′0”

(11a.7.8) The persistence constraint does not hold for modalized formulas. (11a.7.9) Our many-valued modal logic KK3ρτ, augmented by the Persistence Constraint, is called A (for Aristotle). “In this logic p ⊨ □p and ¬p ⊨ □¬p. Aristotle’s argument therefore works. But, of course, in A, p ∨ ¬p may fail to be true.” (11a.7.10) In our Aristotle logic A, neither □p nor □¬p holds. However, Aristotle thinks that eventually p or ¬p will have to hold, and thus he holds □(p ∨ ¬p). Yet this does not hold in logic A. (11a.7.11) To allow □(p ∨ ¬p) to hold in logic A, we can take the temporal perspective of the end of time, when everything has been decided. “Call a world complete if every propositional parameter is either true or false. A natural way of giving the truth conditions for □ is as follows:

□Aρw1 iff for all complete w′ such that wRw′, Aρw′1

□Aρw0 iff for some complete w′ such that wRw′, Aρw′0

The truth/falsity conditions for ◊ are the same with ‘some’ and ‘all’ interchanged. □A may naturally be seen as expressing the idea that A is inevitable. [...] for any complete world, w, Persistence holds for all formulas. It follows that at such a world, A is true iff □A is, and that all formulas are either true or false” (254). (11a.7.12) These above revised truth/falsity conditions for necessity allow us to capture the important assumptions and valid inferences in Aristotle’s argumentation regarding future contingency, namely: “p ⊨ □p, ¬p ⊨ □¬p (so Aristotle’s argument still works), ⊨ □(p ∨ ¬p), but not ⊨ □p ∨ □¬p” (254).
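[To see how these clauses behave, here is a small Python sketch of my own (not Priest's) of a toy model for the logic A: an undecided present world that accesses two complete futures, one where p is settled true and one where it is settled false. Only ¬, ∨ and □ are implemented, and the revised clause for □ quantifies over complete accessible worlds only.]

W = ['now', 'yes', 'no']
R = {('now', 'now'), ('now', 'yes'), ('now', 'no'), ('yes', 'yes'), ('no', 'no')}   # reflexive, transitive
data = {('p', 'now'): (False, False),   # the future contingent is not yet settled
        ('p', 'yes'): (True, False),    # a complete future where p is true
        ('p', 'no'):  (False, True)}    # a complete future where p is false

def complete(w):                        # every parameter is either true or false at w
    return all(t or f for (q, u), (t, f) in data.items() if u == w)

def rho(A, w):
    op = A[0]
    if op == 'not':
        t, f = rho(A[1], w)
        return (f, t)
    if op == 'or':
        (t1, f1), (t2, f2) = rho(A[1], w), rho(A[2], w)
        return (t1 or t2, f1 and f2)
    if op == 'box':                     # quantify only over *complete* accessible worlds
        succ = [x for x in W if (w, x) in R and complete(x)]
        return (all(rho(A[1], x)[0] for x in succ), any(rho(A[1], x)[1] for x in succ))
    return data[(op, w)]

p = ('p',)
print(rho(('or', p, ('not', p)), 'now'))            # (False, False): p ∨ ¬p is not yet true
print(rho(('box', ('or', p, ('not', p))), 'now'))   # (True, False): yet □(p ∨ ¬p) is true
print(rho(('box', p), 'now'), rho(('box', ('not', p)), 'now'))   # neither □p nor □¬p is true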

 

 



Part II

Quantification and Identity

 

 

 

ch.12

Classical First-order Logic

 

12.1

Introduction

 

In this chapter, we examine the semantics and tableaux of classical first-order logic, along with problems and certain technical issues involved in it.

 

 

12.2

Syntax

 

Our first-order language has the following vocabulary:

• variables: v0, v1, v2, ...
• constants: k0, k1, k2, ...
• for every natural number n > 0, n-place predicate symbols: P0n, P1n, P2n, ...
• connectives: ∧, ∨, ¬, ⊃, ≡
• quantifiers: ∀, ∃
• brackets: (, )

Specifically we may use:

x, y, z for arbitrary variables

a, b, c for arbitrary constants

Pn, Qn, Sn for arbitrary n-place predicates

A, B, C for arbitrary formulas

Σ, Π for arbitrary sets of formulas

Its grammar includes the following:

• Any constant or variable is a term.

The formulas are specified recursively as follows.

• If t1, ... , tn are any terms and P is any n-place predicate, Pt1 .. tn is an (atomic) formula.

• If A and B are formulas, so are the following:

(A ∧ B), (A ∨ B), ¬A, (A ⊃ B), (A ≡ B).

• If A is any formula, and x is any variable, then ∀xA, ∃xA are formulas. I will omit outermost brackets in formulas.

And regarding quantified formulas:

• An occurrence of a variable, x, in a formula, is said to be bound if it occurs in a context of the form ∃x ... x ... or ∀x ... x ....

• If it is not bound, it is free.

• A formula with no free variables is said to be closed.

Ax(c) is the formula obtained by substituting c for each free occurrence of x in A.

 

 

12.3

Semantics

 

Regarding the semantics of classical first-order logic, we say that:

An interpretation of the language is a pair, ℑ = ⟨D, v⟩. D is a non-empty set (the domain of quantification); v is a function such that:

• if c is a constant, v(c) is a member of D

• if P is an n-place predicate, v(P) is a subset of Dn

(Dn is the set of all n-tuples of members of D, {⟨d1, ..., dn⟩: d1, ..., dn ∈ D}. By convention, ⟨d⟩ is just d, and so D1 is D.)

(Priest 264)

All formulas have a truth value. To evaluate them, since they use variables which are substitutable by constants,

we extend the language to ensure that every member of the domain has a name. For all d ∈ D, we add a constant to the language, kd, such that v(kd) = d. The extended language is the language of ℑ, and written L(ℑ). The truth conditions for (closed) atomic sentences are:

v(Pa1 ... an) = 1 iff  ⟨v(a1), ..., v(an)⟩ ∈ v(P) (otherwise it is 0)

The truth conditions for the connectives are as in the propositional case (1.3.2).

(Priest 265)

v(¬A) = 1 if v(A) = 0, and 0 otherwise.
v(A ∧ B) = 1 if v(A) = v(B) = 1, and 0 otherwise.
v(A ∨ B) = 1 if v(A) = 1 or v(B) = 1, and 0 otherwise.
v(A ⊃ B) = 1 if v(A) = 0 or v(B) = 1, and 0 otherwise.
v(A ≡ B) = 1 if v(A) = v(B), and 0 otherwise.

(Priest 5)

For the quantifiers:

v(∀xA) = 1 iff for all d ∈ D, v(Ax(kd)) = 1 (otherwise it is 0)

v(∃xA) = 1 iff for some d ∈ D, v(Ax(kd)) = 1 (otherwise it is 0)

(Priest 265)

We define validity in the following way:

Validity is a relationship between premises and conclusions that are closed formulas, and is defined in terms of the preservation of truth in all interpretations, thus: Σ ⊨ A iff every interpretation that makes all the members of Σ true makes A true.

(Priest 265)

We find the following equivalences in classical first-order logic:

v(¬∃xA) = v(∀x¬A)

v(¬∀xA) = v(∃x¬A)

v(¬∃x(Px ∧ A)) = v(∀x(Px ⊃ ¬A))

v(¬∀x(Px ⊃ A)) = v(∃x(Px ∧ ¬A))

(Priest 265)

And lastly we note [something regarding denotation, namely] that

If C is some set of constants such that every object in the domain has a name in C, then:

v(∀xA) = 1 iff for all c ∈ C, v(Ax(c)) = 1 (otherwise it is 0)

v(∃xA) = 1 iff for some c ∈ C, v(Ax(c)) = 1 (otherwise it is 0)

(Priest 265)
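[As a supplement, here is a minimal Python evaluator of my own (not from Priest) for closed first-order sentences over a finite domain; the interpretation is invented. Instead of extending the language with a constant kd for every d ∈ D, the sketch simply carries a variable assignment, which comes to the same thing over a finite domain.]

D = {1, 2, 3}
v_const = {'a': 1, 'b': 2}
v_pred = {'P': {(1,), (2,)},                 # P holds of 1 and 2
          'R': {(1, 2), (2, 3)}}

def holds(A, env):
    op = A[0]
    if op == 'atom':                          # ('atom', 'P', ('x', 'a', ...))
        pred, terms = A[1], A[2]
        tup = tuple(env[t] if t in env else v_const[t] for t in terms)
        return tup in v_pred[pred]
    if op == 'not':  return not holds(A[1], env)
    if op == 'and':  return holds(A[1], env) and holds(A[2], env)
    if op == 'or':   return holds(A[1], env) or holds(A[2], env)
    if op == 'imp':  return (not holds(A[1], env)) or holds(A[2], env)
    if op == 'all':                           # ('all', 'x', A): try every member of D as the value of x
        return all(holds(A[2], {**env, A[1]: d}) for d in D)
    if op == 'some':
        return any(holds(A[2], {**env, A[1]: d}) for d in D)

# ∀x(Px ⊃ ∃y Rxy): everything that is P bears R to something.
sentence = ('all', 'x', ('imp', ('atom', 'P', ('x',)),
                                ('some', 'y', ('atom', 'R', ('x', 'y')))))
print(holds(sentence, {}))                    # True in this interpretation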

 

 

12.4

Tableaux

 

(12.4.1) The tableau rules for classical first-order logic add four quantifier rules to those from propositional logic.

 

 Double Negation

Development (¬¬D)

¬¬A

A

 

Conjunction

Development (∧D)

A ∧ B

A

B

 

 Negated Conjunction

Development (¬∧D)

¬(A ∧ B)

↙   ↘

¬A       ¬B

 

 Disjunction

Development (∨D)

A ∨ B

↙   ↘

A      B

 

 Negated Disjunction

Development (¬∨D)

¬(A ∨ B)

¬A

¬B

 

 Conditional

Development (⊃D)

A ⊃ B

↙    ↘

¬A        B

 

Negated Conditional

Development (¬⊃D)

¬(A ⊃ B)

A

¬B

 

 Biconditional

Development (≡D)

A ≡ B

↙    ↘

A        ¬A

B        ¬B

 

 Negated Biconditional

Development (¬≡D)

¬(A ≡ B)

↙    ↘

A        ¬A

¬B         B

 

 Negated Existential

Development (¬∃D)

¬∃xA

∀x¬A

 

 Negated Universal

Development (¬∀D)

¬∀xA

∃x¬A

 

 Universal Instantiation

Development (UI,D)

∀xA

Ax(a)

 

where a is any constant on the branch. (If there are not any, we select one at will.)

 

 Particular Instantiation

Development (PI,D)

∃xA

Ax(c)

 

where c is any constant that does not occur so far on the branch.

(6-8; 266, with names and additional text at the bottom made by me)

 

Note: we should never check-off lines of the form ∀xA, because it is always possible that we will later introduce a new constant and then need to repeat universal instantiation on it. Yet, it can be useful to write beside the line all the constants it has been instantiated with. (12.4.2) We cannot apply a rule to an internal part of a line but only to the whole line itself. So we should not have:

¬(A ∧ ∀xB)

¬(A ∧ Bx(a))

(12.4.3) Priest will “write Σ ⊢ A to mean that there is a closed tableau whose initial list comprises the members of Σ together with the negation of A” (266-267). (12.4.4) It does not matter what order we apply the tableau rules. (12.4.5) Priest then gives a first example tableau for a valid inference. (12.4.6) Priest next gives a second example tableau for a valid inference. (12.4.7) Priest then gives a third example tableau, this time  for an invalid inference. (12.4.8) We construct counter-models from open branches in the following way: “take a domain which contains a distinct object, ∂b, for every constant, b, on the branch. v(b) is ∂b. v(P) is the set of n-tuples ⟨∂b1, . . . , ∂bn⟩ such that Pb1 . . . bn occurs on the branch. Of course, if ¬Pb1 . . . bn is on the branch, ⟨∂b1, . . . , ∂bn⟩ ∉ v(P), since the branch is open. (If a predicate or constant does not occur on the branch, the value given to it by v is a don’t care condition: it can be anything one likes)” (268). (12.4.9) We can check which formulas are true by seeing if the model makes them so. (12.4.10) Tableaux can be infinite. One way this happens is if there is an endless generation of new constants on account of repeating applications of instantiation. (12.4.11) When tableaux are infinite, we can use guesswork to formulate counter-models. (12.4.12) Priest next provides an algorithm for ensuring that infinite tableaux are complete: “(1) For each branch in turn (there is only a finite number at any stage of the construction), we run down the formulas on the branch, applying any rule that generates something not already on the branch. (In the case of a rule such as UI, which has multiple applications, we make all the applications possible at this stage.) (2) We then go back and repeat the process” (270). (12.4.13) “The tableaux are sound and complete with respect to the semantics” (270). (12.4.14) In order to show the behavior of quantifiers, Priest lists the ways that the quantifiers interact with the propositional operators.

In classical logic, the interactions are as follows. ‘A ⊣⊢ B’ means ‘A ⊢ B and B ⊢ A’. C is any closed formula. A * at the end of a line indicates that the converse does not hold, in the sense that there are instances that are not valid. (So, for example, in the first line for Negation, if A is Px, we have ¬∀xPx ⊬ ∀x¬Px.) Where the converse does not hold, there is often a restricted version involving a closed formula that does. Where this exists, it is given on the next line. [...]

1. No Operators

(a) ∀xC ⊣⊢ C

(b) ∃xC ⊣⊢ C

2. Negation

(a) ∀x¬A ⊢ ¬∀xA *

(b) ¬∀xC ⊢ ∀x¬C

(c) ¬∃xA ⊢ ∃x¬A *

(d) ∃x¬C ⊢ ¬∃xC

(e) ¬∃xA ⊣⊢ ∀x¬A

(f) ¬∀xA ⊣⊢ ∃x¬A

3. Disjunction

(a) ∀xA ∨ ∀xB ⊢ ∀x(A ∨ B) *

(b) ∀x(A ∨ C) ⊢ ∀xA ∨ C

(c) ∃xA ∨ ∃xB ⊣⊢ ∃x(A ∨ B)

(d) ∀xA ∨ ∀xB ⊢ ∃x(A ∨ B) *

(e) ∀x(A ∨ B) ⊢ ∃xA ∨ ∃xB *

4. Conjunction

(a) ∀xA ∧ ∀xB ⊣⊢ ∀x(A ∧ B)

(b) ∃x(A ∧ B) ⊢ ∃xA ∧ ∃xB *

(c) ∃xA ∧ C ⊢ ∃x(A ∧ C)

(d) ∀xA ∧ ∀xB ⊢ ∃x(A ∧ B) *

(e) ∀x(A ∧ B) ⊢ ∃xA ∧ ∃xB *

5. Conditional

(a) ∀x(A ⊃ B) ⊢ ∀xA ⊃ ∀xB *

(b) C ⊃ ∀xB ⊢ ∀x(C ⊃ B)

(c) ∃xA ⊃ ∃xB ⊢ ∃x(A ⊃ B) *

(d) ∃x(C ⊃ B) ⊢ C ⊃ ∃xB

(e) ∀x(A ⊃ B) ⊢ ∃xA ⊃ ∃xB *

(f) ∃xA ⊃ C ⊢ ∀x(A ⊃ C)

(g) ∀xA ⊃ ∀xB ⊢ ∃x(A ⊃ B) *

(h) ∃x(A ⊃ C) ⊢ ∀xA ⊃ C

(270-272)

 

 

12.5

Identity

 

(12.5.1) The identity predicate is a binary predicate, symbolized as ‘=’ and written in formulas as ‘a1 = a2’. We can write ‘¬a1 = a2’ as ‘a1 ≠ a2’. (12.5.2) “In any interpretation, ⟨D, v⟩, v(=) is {⟨d, d⟩: d ∈ D}. That is, ⟨d, e⟩ is in the denotation of the identity predicate, just if d is e” (272). (12.5.3) We add two tableau rules for identity to our given list. One is like the principle of identity, where we can always add in a tableau a formula of the form a = a. The other is the Substitutivity of Identicals, where whenever we have a formula of the form a = b, we can always substitute one term for the other in the formulas.

 

 Double Negation

Development (¬¬D)

¬¬A

A

 

Conjunction

Development (∧D)

A ∧ B

A

B

 

 Negated Conjunction

Development (¬∧D)

¬(A ∧ B)

↙   ↘

¬A       ¬B

 

 Disjunction

Development (∨D)

A ∨ B

↙   ↘

A      B

 

 Negated Disjunction

Development (¬∨D)

¬(A ∨ B)

¬A

¬B

 

 Conditional

Development (⊃D)

A ⊃ B

↙    ↘

¬A        B

 

Negated Conditional

Development (¬⊃D)

¬(A ⊃ B)

A

¬B

 

 Biconditional

Development (≡D)

A ≡ B

↙    ↘

A        ¬A

B        ¬B

 

 Negated Biconditional

Development (¬≡D)

¬(A ≡ B)

↙    ↘

A        ¬A

¬B         B

 

 Negated Existential

Development (¬∃D)

¬∃xA

∀x¬A

 

 Negated Universal

Development (¬∀D)

¬∀xA

∃x¬A

 

 Universal Instantiation

Development (UI,D)

∀xA

Ax(a)

 

where a is any constant on the branch. (If there are not any, we select one at will.)

 

 Particular Instantiation

Development (PI,D)

∃xA

Ax(c)

 

where c is any constant that does not occur so far on the branch.

 

Principle of Identity

Development (=D)

.

a = a

 

(You can always add a line of the form a = a)

 

Substitutivity of Identicals (SI,D)

a = b

Ax(a)

Ax(b)

 

where A is any atomic sentence distinct from a = b.

(6-8; 266, 272, with names and additional text at the bottom made by me)

 

(12.5.4) Because SI allows us to substitute identical terms, if we have a formula of the form a = b on one line and Saa on another, then we can also have Sba, Sab, and Sbb. (12.5.5) “The first identity rule has two functions. The first is to close any branch with a line of the form a ≠ a. The second is to allow us, given an identity, to interchange a and b: a = b ; a = a ; b = a [...] This allows us to close any tableau with lines of the form a = b and b ≠ a, and also to apply SI substituting a for b, instead of the other way around” (273). Note: in our tableaux, we will not add lines of a = a in order to close branches with ¬a = a. Rather, if we obtain ¬a = a, we simply close them there. Likewise, if on a branch we have a = b and b ≠ a, we will simply close the branch there without the added steps. (12.5.6) We will use the comments in the above two sections in our frequent encounters with tableaux for identity. (12.5.7) Priest next gives a tableau example for a valid formula. (12.5.8) Priest then gives a tableau example for an invalid formula. (12.5.9) “To read off a counter-model from an open branch, we take a domain which contains a distinct object, ∂b, for every constant, b, on the branch. v(b) is ∂b. v(P) is the set of n-tuples ⟨∂b1, . . . , ∂bn⟩ such that Pb1 . . . bn occurs on the branch. Of course, if ¬Pb1 . . . bn is on the branch, ⟨∂b1, . . . , ∂bn⟩ ∉ v(P), since the branch is open. (If a predicate or constant does not occur on the branch, the value given to it by v is a don’t care condition: it can be anything one likes.)” (p.268). However, “whenever we have a bunch of lines of the form a = b, b = c, etc., we choose one of the constants, say, a, and let all the constants in the bunch denote ∂a” (274). (12.5.10) Priest gives an example counter-model. (12.5.11) The tableaux are sound and complete.
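[Here is a small Python sketch of my own (not Priest's) of the read-off procedure with identities: constants linked by identity lines on the branch are grouped into a bunch and all made to denote the one object. The branch data are invented.]

identities = [('a', 'b'), ('b', 'c')]          # lines a = b, b = c on the open branch
constants = ['a', 'b', 'c', 'd']

rep = {c: c for c in constants}                # each constant starts as its own representative
def find(c):
    while rep[c] != c:
        c = rep[c]
    return c

for (x, y) in identities:                      # merge the bunches linked by each identity
    rep[find(y)] = find(x)

denotation = {c: '∂' + find(c) for c in constants}
print(denotation)   # {'a': '∂a', 'b': '∂a', 'c': '∂a', 'd': '∂d'}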

 

 

12.6

Some Philosophical Issues

 

(12.6.1) We will now examine some problems with classical first-order semantics. (12.6.2) One problem with classical first-order semantics is the following. A standard interpretation of ∃x is ‘There exists an x such that’. This means that if we have ∃xA, it tells us that something exists which satisfies A. Furthermore, it is a logical truth in classical first-order semantics that ∃x(A ∨ ¬A). This means that within these semantics we are forced to hold that something must exist that satisfies either A or its negation, and that furthermore implies that we have to conclude that, no matter what, something must exist. But that claim does not seem like a logical truth, because we can think it possible that nothing exists. To deal with this problem, we cannot simply allow the domain of quantification to be empty, because that leaves us unable to assign constants a denotation. Another solution, which we explore later, is to make the evaluation function v a partial function, meaning that it has no value for some constants. (12.6.3) Another problem with classical first-order semantics is that Ax(a) ⊨ ∃xA is valid, meaning that anything that can be predicated must exist. But Pegasus can be predicated of (for example, as a mythological figure), yet it does not in fact exist. (12.6.4) Even if one objects that the problem with the Pegasus example is that we wrongly treat existence as a proper predicate, we can still find true sentences with other sorts of predicates whose objects nonetheless do not exist, like “Sherlock Holmes is a character in a work of fiction.” The denotation calls for Holmes to be in the domain, but his fictionality calls for him not to be in the domain. (12.6.5) Another problem is that, on account of the validity of the substitutivity of identicals, the following would be a valid inference: a = b, ‘a’ is the first letter of the alphabet; so ‘b’ is the first letter of the alphabet. However, this argument can be easily dispelled by claiming that the quotational usage here is not proper to first-order logic. (12.6.6) This problem with the substitutivity of identicals has a more stubborn form. Suppose we have a picture of a person as a baby, and we call them a. We can say that it is true that a is a baby. Now suppose also that we have a picture of an adult, and we call them b. We are then informed that a and b are the same person, and thus a = b. The substitutivity of identicals would say that because a is a baby, b must be one too. But that cannot be so, because b is an adult and thus not a baby. (12.6.7) We can solve this problem by making time designations in our predications. We can say either that a-at-time-t is a baby or that a is a-baby-at-time-t. In either case, even if we use the substitutivity of identicals, what we will get is still something true: b-at-time-t is a baby, or b is a-baby-at-time-t. (12.6.8) But the use of temporal designations does not work in cases where we have intentional states. We can have mental states in which we are thinking about the novelist George Eliot. Later we can learn that George Eliot is the pen name of Mary Anne Evans. That establishes an identity, and on account of the substitutivity of identicals, we should be able to say that back before you learned this, whenever you were thinking about George Eliot, you were also thinking about Mary Anne Evans. But that is not really what was going on in your mind. (12.6.9) Priest ends by noting that we will return to these problems in later chapters.

 

 

 

13.

Free Logics

 

13.1

Introduction

 

(13.1.1) “The family of free logics is a family of systems of logic that dispense with a number of the existential assumptions of classical logic” (290). (13.1.2) We will examine the semantics of free logics and their tableau systems too. (13.1.3) We will also discuss philosophical questions regarding free logics and existence. (13.1.4) For most of the chapter, we do not include the identity predicate, but we add it at the end to see what happens.

 

 

13.2

Syntax and Semantics

 

(13.2.1) Free logics have the same vocabulary as classical first-order logics.

• variables: v0, v1, v2, ...
• constants: k0, k1, k2, ...
• for every natural number n > 0, n-place predicate symbols: P0n, P1n, P2n, ...
• connectives: ∧, ∨, ¬, ⊃, ≡
• quantifiers: ∀, ∃
• brackets: (, )

We will call ∀ and ∃ the universal and particular quantifiers, respectively.

(p.263, section 12.2.1)

But in free logics we also have the one-place existence predicate ℭ. We can think of ℭa as meaning ‘a exists’. (13.2.2) In free logics, we have our main domain of all objects, D, and we have the “inner domain” E. It is a subset of D that we think of as the set of all existent objects. (“An interpretation for the language is a triple ⟨D, E, v⟩, where D is a non-empty set, and E (the ‘inner domain’) is a (possibly empty) subset of D. One can think of D as the set of all objects, and E as the set of all existent objects” (290).) So suppose D contains Sherlock Holmes, Pegasus, and Julius Caesar. Although all of them are in D, only Caesar is in E. (13.2.3) “As in classical logic, v assigns every constant in the language a member of D, and every n-place predicate a subset of Dn. In any interpretation, v(ℭ) = E” (290). (13.2.4) The truth conditions in free logics are the same as for classical logic, except that the ones for the quantifiers only concern the domain of existents.

The truth conditions for (closed) atomic sentences are:

v(Pa1 ... an) = 1 iff  ⟨v(a1), ..., v(an)⟩ ∈ v(P) (otherwise it is 0)

(p.265, section 12.3.2)

vA) = 1 if v(A) = 0, and 0 otherwise.
v(A B) = 1 if v(A) = v(B) = 1, and 0 otherwise.
v(A B) = 1 if v(A) = 1 or v(B) = 1, and 0 otherwise.
v(A B) = 1 if v(A) = 0 or v(B) = 1, and 0 otherwise.
v(A B) = 1 if v(A) = v(B), and 0 otherwise.

(p.5, section 1.3.2)

v(∀xA) = 1 iff for all d ∈ E, v(Ax(kd)) = 1 (otherwise it is 0)

v(∃xA) = 1 iff for some d ∈ E, v(Ax(kd)) = 1 (otherwise it is 0)

(291)

(13.2.5) “An inference is semantically valid if it is truth-preserving in all interpretations, as in classical logic” (291). (13.2.6) If C is some set of constants such that every object in D has a name in C, then:

v(∀xA) = 1 iff for all c ∈ C such that v(ℭc) = 1, v(Ax(c)) = 1 (otherwise it is 0)

v(∃xA) = 1 iff for some c ∈ C such that v(ℭc) = 1, v(Ax(c)) = 1 (otherwise it is 0)

(291)
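[Here is a toy positive free-logic interpretation of my own in Python (not from Priest); the domain, names, and predicate are invented. It shows how a non-existent object can satisfy a predicate while the particular quantifier, which ranges over E only, remains unsatisfied (the failure of Ax(a) ⊨ ∃xA noted later in 13.4.1).]

D = {'holmes', 'caesar'}
E = {'caesar'}                                 # the existent objects; v(ℭ) = E
v_const = {'h': 'holmes', 'c': 'caesar'}
v_pred = {'Famous': {('holmes',)}}             # only the non-existent Holmes is 'Famous'

def holds(A, env):
    op = A[0]
    if op == 'atom':
        tup = tuple(env[t] if t in env else v_const[t] for t in A[2])
        return tup in v_pred[A[1]]
    if op == 'not':  return not holds(A[1], env)
    if op == 'and':  return holds(A[1], env) and holds(A[2], env)
    if op == 'some':                           # particular quantifier: ranges over E, not D
        return any(holds(A[2], {**env, A[1]: d}) for d in E)
    if op == 'all':
        return all(holds(A[2], {**env, A[1]: d}) for d in E)

print(holds(('atom', 'Famous', ('h',)), {}))                    # True: a non-existent satisfies the predicate
print(holds(('some', 'x', ('atom', 'Famous', ('x',))), {}))     # False: so Ax(a) ⊭ ∃xA in free logic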

 

 

13.3

Tableaux

 

(13.3.1) The tableau rules for free logics are the same as for classical logic, except the rules for universal and particular instantiation are different. (Below I compile all the rules.)

 

 Double Negation

Development (¬¬D)

¬¬A

A

 

Conjunction

Development (∧D)

A ∧ B

A

B

 

 Negated Conjunction

Development (¬∧D)

¬(A ∧ B)

↙   ↘

¬A       ¬B

 

 Disjunction

Development (∨D)

A ∨ B

↙   ↘

A      B

 

 Negated Disjunction

Development (¬∨D)

¬(A ∨ B)

¬A

¬B

 

 Conditional

Development (⊃D)

A ⊃ B

↙    ↘

¬A        B

 

Negated Conditional

Development (¬⊃D)

¬(A ⊃ B)

A

¬B

 

 Biconditional

Development (≡D)

A ≡ B

↙    ↘

A        ¬A

B        ¬B

 

 Negated Biconditional

Development (¬≡D)

¬(A ≡ B)

↙    ↘

A        ¬A

¬B         B

 

 Negated Existential

Development (¬∃D)

¬∃xA

∀x¬A

 

 Negated Universal

Development (¬∀D)

¬∀xA

∃x¬A

 

 Universal Instantiation

Development (UI,D)

∀xA

↙    ↘

¬ℭa      Ax(a)

 

where a is any constant on the branch (choosing a new constant only if there are none there already)

 

 Particular Instantiation

Development (PI,D)

∃xA

ℭc

Ax(c)

 

where c is a constant new to the branch

(6-8; 266; 291, with names and additional text at the bottom made by me)

 

(13.3.2) Priest next gives an example tableau for a valid inference. (13.3.3) Priest then gives a couple example tableaux for invalid inferences. (13.3.4) “To read off a counter-model from an open branch, we take a domain which contains a distinct object, ∂b, for every constant, b, on the branch. v(b) is ∂b. v(P) is the set of n-tuples ⟨∂b1, . . . , ∂bn⟩ such that Pb1 . . . bn occurs on the branch. Of course, if ¬Pb1 . . . bn is on the branch, ⟨∂b1, . . . , ∂bn⟩ ∉ v(P), since the branch is open. (If a predicate or constant does not occur on the branch, the value given to it by v is a don’t care condition: it can be anything one likes.)” (p.268). And, E = v(ℭ). (13.3.5) Priest next provides a couple example counter-models.

 

 

13.4

Free Logics: Positive, Negative and Neutral

 

(13.4.1) Free logics do not have two problematic inferences of classical first-order logic. They do not have as a logical truth that ∃x(Px ∨ ¬Px), in other words, that it is impossible for nothing to exist. And they do not have the inference Ax(a) ⊨ ∃xA, in other words, that anything that takes a predication must be an existing thing. Free logics avoid these problems by allowing there to be non-existing things in the domain. (13.4.2) Some might still want to use free logics to accommodate non-existing things, but they might think that non-existing things should not have positive properties. For, while existing things have the tangible, physical properties that allow them to be seen and physically interacted with, non-existing things do not. (So we might want to say that Sherlock Holmes is in our domain, but we might also want to say that, as a non-existing object, he cannot actually live on Baker St.; for only physically real things can have spatial location.) To disallow non-existing objects from having positive properties, we could apply the negativity constraint: if ⟨d1, . . . , dn⟩ ∈ v(P) then d1 ∈ v(ℭ), and … and dn ∈ v(ℭ). (In other words, if something belongs to a predicate, it needs to be an existent thing.) Free logics with the negativity constraint are called negative free logics. (13.4.3) The tableau rules for negative free logics are all those for unrestricted free logic plus the Negativity Constraint Rule (NCR), which allows for the characteristic inference of negative free logics: Pa1 . . . ai . . . an ⊢ ∃xPa1 . . . x . . . an. (Below I compile all the rules.)

 

 Double Negation

Development (¬¬D)

¬¬A

A

 

Conjunction

Development (∧D)

A ∧ B

A

B

 

 Negated Conjunction

Development (¬∧D)

¬(A ∧ B)

↙   ↘

¬A       ¬B

 

 Disjunction

Development (∨D)

A ∨ B

↙   ↘

A      B

 

 Negated Disjunction

Development (¬∨D)

¬(A ∨ B)

¬A

¬B

 

 Conditional

Development (⊃D)

A ⊃ B

↙    ↘

¬A        B

 

Negated Conditional

Development (¬⊃D)

¬(A ⊃ B)

A

¬B

 

 Biconditional

Development (≡D)

A ≡ B

↙    ↘

A        ¬A

B        ¬B

 

 Negated Biconditional

Development (¬≡D)

¬(A ≡ B)

↙    ↘

A        ¬A

¬B         B

 

 Negated Existential

Development (¬∃D)

¬∃xA

∀x¬A

 

 Negated Universal

Development (¬∀D)

¬∀xA

∃x¬A

 

 Universal Instantiation

Development (UI,D)

∀xA

↙    ↘

¬ℭa      Ax(a)

 

where a is any constant on the branch (choosing a new constant only if there are none there already)

 

 Particular Instantiation

Development (PI,D)

∃xA

ℭc

Ax(c)

 

where c is a constant new to the branch

 

 Negativity Constraint Rule(NCR,D)

Pa1 ... an

ℭa1

ℭan

 

(6-8; 266; 291; 293, with names and additional text at the bottom made by me)

 

(13.4.4) Priest next gives an example tableau for an invalid inference, and he states that we construct countermodels in the same way as before: “To read off a counter-model from an open branch, we take a domain which contains a distinct object, ∂b, for every constant, b, on the branch. v(b) is ∂b. v(P) is the set of n-tuples ⟨∂b1, . . . , ∂bn⟩ such that Pb1 . . . bn occurs on the branch. Of course, if ¬Pb1 . . . bn is on the branch, ⟨∂b1, . . . , ∂bn⟩ ∉ v(P), since the branch is open. (If a predicate or constant does not occur on the branch, the value given to it by v is a don’t care condition: it can be anything one likes.)” (p.268). And, E = v(ℭ) (p.292). Priest then provides an example counter-model. (13.4.5) “The tableaux for positive and negative free logics are sound and complete with respect to their semantics” (294). (13.4.6) But negative free logics do not account for why sentences such as “Homer worshipped Zeus” are intuitively true even though negative free logics would make them false on account of the fact that it is a non-existent object that is being predicated. (13.4.7) An alternative to negative free logics would be neutral free logics, which say that sentences containing names that do not refer to existent objects would be neither true nor false. We deal with this in ch.21.

 

 

13.5

Quantification and Existence

 

(13.5.1) We might want a free logic where quantifiers range over all objects and not just existent ones. (13.5.2) Quantifiers ranging over the outer domain D are called the outer quantifiers, and they are written as ∃ and ∀. The quantifiers that range over the inner domain E are called inner quantifiers, and they are written as ∃E and ∀E. (13.5.3) We read ∀xA as ‘Every x is such that A’; ∀ExA as ‘Every existent x is such that A’; ∃xA as ‘Some x is such that A’ or as ‘Something is A’; and ∃ExA as ‘There exists an x such that A’ or as ‘There is an x such that A’. (13.5.4) We should not think that the existential quantifier of natural language necessarily implies existence. (13.5.5) There is an argument for reading the existential quantifier as ‘there exists’. The argument wants to avoid problems like the ontological argument, so it does not allow existence to be a predicate; instead, it takes the only other viable way of expressing existence to be the existential quantifier. Part of the thinking is that only things that are there can be predicated. But this is not a convincing argument, because there are many examples of the predication of non-existing objects, like Zeus being worshipped. (13.5.6) If we wish, we can define the inner quantifiers in terms of the outer ones, which means that “in a free logic with outer quantifiers, we can dispense with inner quantifiers altogether,” namely, in the following way:

∃ExA     ∃x(ℭx ∧ A)

∀ExA     ∀x(ℭx ⊃ A)

(p.297). However, “There is no way of defining outer quantifiers in terms of inner quantifiers” (297). (13.5.7) These new semantics make one problematic inference no longer problematic, namely, Ax(a) ⊨ ∃xA, which now means only that if something can be predicated, it is some object, existent or non-existent (previously it implied that any predicable thing must be existent). But they may not make the logical truth ∃x(A ∨ ¬A) unproblematic (it now implies only that there is at least one object, existent or not, while before it implied there must be at least one existent object).
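As a quick illustration of the definitions just given, the following Python sketch (mine, using a hypothetical two-object model in which only one object exists) checks that the inner quantifiers agree with their proposed definitions in terms of the outer ones.

# Toy check of 13.5.6: D has two objects; only d1 is in the inner domain E.
D = {"d1", "d2"}
E = {"d1"}                       # existent objects; ℭx is true iff x ∈ E
P = {"d1", "d2"}                 # a hypothetical predicate true of both objects

def exists_inner(pred):          # ∃ᴱxPx: some existent object satisfies P
    return any(d in pred for d in E)

def exists_outer_defined(pred):  # ∃x(ℭx ∧ Px): the proposed definition
    return any(d in E and d in pred for d in D)

def all_inner(pred):             # ∀ᴱxPx
    return all(d in pred for d in E)

def all_outer_defined(pred):     # ∀x(ℭx ⊃ Px)
    return all((d not in E) or (d in pred) for d in D)

print(exists_inner(P) == exists_outer_defined(P))   # True
print(all_inner(P) == all_outer_defined(P))         # True
print(exists_inner({"d2"}))                # False: only a non-existent object is P
print(any(d in {"d2"} for d in D))         # True: the outer ∃xPx still holds

The last two lines show why the outer quantifiers cannot conversely be defined from the inner ones: the inner quantifiers never see the merely non-existent objects.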

 

13.6

Identity in Free Logic

 

(13.6.1) In our free logics, identity will be defined as in classical logic: v(=) = {⟨d, d⟩ : d ∈ D}, and it will have the same properties and tableau rules as in classical logic too.

 

 Double Negation

Development (¬¬D)

¬¬A

A

 

Conjunction

Development (∧D)

A ∧ B

A

B

 

 Negated Conjunction

Development (¬∧D)

¬(A ∧ B)

↙   ↘

¬A       ¬B

 

 Disjunction

Development (∨D)

A ∨ B

↙   ↘

A      B

 

 Negated Disjunction

Development (¬∨D)

¬(A ∨ B)

¬A

¬B

 

 Conditional

Development (⊃D)

A ⊃ B

↙    ↘

¬A        B

 

Negated Conditional

Development (¬⊃D)

¬(A ⊃ B)

A

¬B

 

 Biconditional

Development (≡D)

A ≡ B

↙    ↘

A        ¬A

B        ¬B

 

 Negated Biconditional

Development (¬≡D)

¬(A ≡ B)

↙    ↘

A        ¬A

¬B         B

 

 Negated Existential

Development (¬∃D)

¬∃xA

∀x¬A

 

 Negated Universal

Development (¬∀D)

¬∀xA

∃x¬A

 

 Universal Instantiation

Development (UI,D)

∀xA

↙    ↘

¬ℭa      Ax(a)

 

where a is any constant on the branch (choosing a new constant only if there are none there already)

 

 Particular Instantiation

Development (PI,D)

∃xA

ℭc

Ax(c)

 

where c is a constant new to the branch

 

Principle of Identity

Development (=D)

.

a = a

 

(You can always add a line of the form a = a)

 

Substitutivity of Identicals (SI,D)

a = b

Ax(a)

Ax(b)

 

where A is any atomic sentence distinct from a = b.

(6-8; 266, 272, 291, with names and additional text at the bottom made by me)

 

(13.6.2) Negative free logics are constrained by the Negativity Constraint, which says that anything in the extension of a predicate must be an existent thing. That means whenever a does not exist, a = a is false, because a non-existent object cannot be in the extension of any predicate, identity included. (13.6.3) For negative free logics, the extension of the identity predicate is limited to existent things: v(=) = {⟨d, d⟩ : d ∈ E}. We must change the first tableau identity rule to the “Self-Identity of Existents,” which requires a line of the form ℭa before we can obtain a = a.

 

Self-Identity of Existents (SIE)

ℭa

a = a

 

(You can always add a line of the form a = a if you already have ℭa)

 

The second identity rule on the substitutivity of identicals stays the same. But to close a branch, it is not enough merely to have a ≠ a. It would only close on that basis if, in addition to a ≠ a, we also have ℭa. (Note that we probably also need the Negativity Constraint Rule too:)

 

 Negativity Constraint Rule (NCR,D)

Pa1 ... an

ℭa1

⋮

ℭan

(p.293, section 13.4.3, with name added at the top)

 

(13.6.4) Priest next gives an example tableau for a valid formula in negative free logic and another for an invalid formula. (13.6.5) We form a counter-model from an open branch in the following way: “given a bunch of identities, a = b, b = c, . . . on a branch, one chooses a single object for all the constants in the bunch to denote. For every predicate, P, excluding identity (but including ℭ), ⟨∂a1, . . . , ∂an⟩ ∈ v(P) iff Pa1 . . . an is on the branch; E = v(ℭ); and v(=) comprises the set of all pairs ⟨d, d⟩, where d is any object in E” (298-299). (13.6.6) These tableaux for identity are sound and complete. (13.6.7) The application of the Negativity Constraint to identity would make certain formulations false, in correspondence with our intuitions. For example, it makes Sherlock Holmes = Pegasus false. However, it still produces counter-examples similar to ones normally found in negative free logics. So we would want Father Christmas = Santa Claus or Santa Claus = Santa Claus to be true, but the negativity constraint makes them false. (13.6.8) The substitutivity of identicals remains valid regardless of which treatment of identity we choose, and thus we still have problems dealing with situations where something develops so much that it remains identical but is not substitutable with regard to its predications at the beginning and end of its development. For example, one same person begins as a baby and ends as an adult. But it seems odd that we are allowed then to say that the adult is a baby. (13.6.9) “With just outer quantifiers, free logic is just classical logic plus a distinguished predicate for existence. And in positive free logic, even this predicate satisfies no special semantic conditions. The only difference is therefore simply one of informal interpretation” (299). (13.6.10) “With just inner quantifiers, consider a free logic interpretation – positive or negative, with or without identity – where D = E; this is a classical interpretation. Hence, any inference (not involving ℭ) that is valid in the logic is valid in classical logic” (299). (13.6.11) But, “there is a limited relationship in the other direction;” for example, “∀xPx ⊭ ∃xPx […] but this is classically valid” (299).
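The identity step of the counter-model recipe in 13.6.5 amounts to grouping constants that are chained together by identities on the branch and assigning one object per group. A small Python sketch of that grouping (my own illustration, with hypothetical branch data):

# Constants linked by identities on the branch share a single denotation.
identities = [("a", "b"), ("b", "c")]      # hypothetical lines a = b and b = c
constants = ["a", "b", "c", "e"]           # 'e' occurs on the branch unidentified

parent = {c: c for c in constants}         # a tiny union-find over the constants
def find(x):
    while parent[x] != x:
        x = parent[x]
    return x
def union(x, y):
    parent[find(x)] = find(y)

for x, y in identities:
    union(x, y)

denotation = {c: "d_" + find(c) for c in constants}
print(denotation)   # a, b, c all receive one object; e gets its own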





ch.14

Constant Domain Modal Logics

 

14.1

Introduction

 

(14.1.1) We will examine quantified normal modal logics, of which there are two kinds: {1} constant domain quantified normal modal logics, in which “the domain of quantification is the same in all worlds”, and {2} variable domain quantified normal modal logics, in which “the domain may vary from world to world” (308). (14.1.2) “If S is any system of propositional modal logic, CS will denote the constant domain quantified version, and VS will denote the variable domain quantified version” (308). (14.1.3) Here we just examine the semantics and tableaux for constant domain logics, and in the next chapter we do those for variable domain logics. (14.1.4) We exclude identity from our language for now, but we return to it in chapter 16. (14.1.5) We will also examine essentialism. (14.1.6) We lastly expand to tense logic.



14.2

Constant Domain K

 

(14.2.1) Our quantified modal logic will augment the language of first-order classical logic with the modal operators □ and ◊.

Our first-order language has the following vocabulary:

• variables: v0, v1, v2, ...
• constants: k0, k1, k2, ...
• for every natural number n > 0, n-place predicate symbols: P0n, P1n, P2n, ...
• connectives: ∧, ∨, ¬, ⊃, ≡
• quantifiers: ∀, ∃
• brackets: (, )

Specifically we may use:

x, y, z for arbitrary variables

a, b, c for arbitrary constants

Pn, Qn, Sn for arbitrary n-place predicates

A, B, C for arbitrary formulas

• Σ, Π for arbitrary sets of formulas

Its grammar includes the following:

• Any constant or variable is a term.

The formulas are specified recursively as follows.

• If t1, ... , tn are any terms and P is any n-place predicate, Pt1 .. tn is an (atomic) formula.

• If A and B are formulas, so are the following:

(A ∧ B), (A ∨ B), ¬A, (A ⊃ B), (A ≡ B).

• If A is any formula, and x is any variable, then ∀xA, ∃xA are formulas. I will omit outermost brackets in formulas.

And regarding quantified formulas:

• An occurrence of a variable, x, in a formula, is said to be bound if it occurs in a context of the form ∃x ... x ... or ∀x ... x ....

• If it is not bound, it is free.

• A formula with no free variables is said to be closed.

Ax(c) is the formula obtained by substituting c for each free occurrence of x in A.

(mostly quotation from pp.263-264, section 12.2)

To all this we add the modal operators □ and ◊:

Intuitively, □A is read as ‘It is necessarily the case that A’; ◊A as ‘It is possibly the case that A’.

If A  is a formula, so are □A and ◊A.

(p.21, sections 2.3.1, 2.3.2).

(14.2.2) “An interpretation for the language is a quadruple ⟨D, W, R, v⟩. W is a (non-empty) set of worlds, and R is a binary accessibility relation on W, as in the propositional case. D is the non-empty domain of quantification, as in classical first-order logic. v assigns each constant, c, of the language a member, v(c), of D, and each pair comprising a world, w, and an n-place predicate, P, a subset of Dn. I will write this as vw(P). Intuitively, vw(P) is the set of n-tuples that satisfy P at world w – which may change from world to world. (Thus, ⟨Caesar, Brutus⟩ is in the extension of ‘was murdered by’ at this world, but in a world where Brutus was not persuaded to join the conspirators, it is not.) The language of an interpretation, , is obtained by adding a constant to the language for every member of D” (309). (14.2.3) “Each closed formula, A, is now assigned a truth value, vw(A), at each world, w. The truth conditions for atomic formulas are as follows: vw(Pa1 . . . an) = 1 iff ⟨v(a1), . . . , v(an)⟩ ∈ vw(P) (otherwise it is 0). The truth conditions for the connectives and modal operators are as in the propositional case” and for quantifiers as in first-order logic.

vw(¬A) = 1 if vw(A) = 0, and 0 otherwise.

vw(A ∧ B) = 1 if vw(A) = vw(B) = 1, and 0 otherwise.

vw(A ∨ B) = 1 if vw(A) = 1 or vw(B) = 1, and 0 otherwise.

[…]

vw(◊A) = 1 if, for some w′ ∈ W such that wRw′, vw′(A) = 1; and 0 otherwise.

vw(□A) = 1 if, for all w′ ∈ W such that wRw′, vw′(A) = 1; and 0 otherwise.

vw(∀xA) = 1 iff for all d ∈ D, vw(Ax(kd)) = 1 (otherwise it is 0)

vw(∃xA) = 1 iff for some d ∈ D, vw(Ax(kd)) = 1 (otherwise it is 0)

(pp.21-22, sections 2.3.4, 2.3.5, 309)

(14.2.4) “An inference is valid if it is truth-preserving in all worlds of all interpretations” (309). (14.2.5) “The above semantics define the constant domain modal logic CK, corresponding to the propositional logic K” (309).
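The clauses of 14.2.2–14.2.3 can be turned into a small recursive evaluator. The following Python sketch is my own illustration over a hypothetical two-world interpretation; formulas are nested tuples, and quantified bodies are written as functions standing in for the substitution Ax(kd).

# A hypothetical constant domain interpretation ⟨D, W, R, v⟩.
W = {0, 1}
R = {(0, 1)}                                     # world 0 accesses world 1
D = {"d"}                                        # one-object constant domain
v_const = {"a": "d"}
v_pred = {("P", 0): set(), ("P", 1): {("d",)}}   # vw(P): world-relative extensions

def denot(t):                                    # constants and object names kd
    return v_const.get(t, t)

def val(w, A):                                   # vw(A), following 14.2.3
    op = A[0]
    if op == "atom":                             # ("atom", "P", ("a",))
        _, P, args = A
        return tuple(denot(t) for t in args) in v_pred[(P, w)]
    if op == "not":
        return not val(w, A[1])
    if op == "and":
        return val(w, A[1]) and val(w, A[2])
    if op == "box":                              # true at all accessible worlds
        return all(val(u, A[1]) for u in W if (w, u) in R)
    if op == "dia":                              # true at some accessible world
        return any(val(u, A[1]) for u in W if (w, u) in R)
    if op == "all":                              # ("all", "x", body): body maps
        return all(val(w, A[2](d)) for d in D)   # each d ∈ D to the instance Ax(kd)
    if op == "some":
        return any(val(w, A[2](d)) for d in D)

print(val(0, ("dia", ("atom", "P", ("a",)))))                          # True: ◊Pa at world 0
print(val(0, ("box", ("all", "x", lambda d: ("atom", "P", (d,))))))    # True: □∀xPx at world 0

Validity in CK would then amount to truth-preservation at every world of every such interpretation, which this sketch only illustrates for a single one.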





ch.15

Variable Domain Modal Logics

 

15.1

Introduction

 

(15.1.1) We turn now to variable domain quantified normal logics. (15.1.2) “We will start with K and its normal extensions. Next we observe how matters can be extended to tense logic” (329). (15.1.3) “There are then some comments on other extensions of the logics involved.” (329). (15.1.4) “The chapter ends with a brief discussion of two major philosophical issues that variable domain semantics throw into prominence: the question of existence across worlds, and the connection (or lack thereof) between existence and the particular quantifier” (329).



15.2

Prolegomenon

 

(15.2.1) We might object to constant domain semantics by saying that “Just as the properties of objects may vary from world to world, what exists at a world, it is natural to suppose, may vary from world to world. Thus, I exist at this world, but in a world where my parents never met, I do not exist. Or, at this world, Sherlock Holmes does not exist, but in a world that realises the stories of Arthur Conan Doyle, he does” (329). (15.2.2) Another way to make the point that existence should be able to vary between worlds involves the Barcan Formula and the Converse Barcan Formula. (15.2.3) One solution is to say that the domain of quantification varies from world to world. But this creates a problem for universal instantiation when the instantiating constant denotes an object that is not in the domain of the world in question, though it is in the domain of some other world. (15.2.4) The best solution to this issue of existence varying from world to world is to use free logic and thus to include the existence predicate, ℭ.

 

 

15.3

Variable Domain K and its Normal Extensions

 

(15.3.1) “[A] variable domain interpretation is a quadruple ⟨D, W, R, v⟩. D, W, R and v are the same as in the constant domain case, with the exception that for every wW, v maps w to a subset of D, that is, | v(w) ⊆ D. v(w) is the domain at world w. I will write it as Dw. Note that for any n-place predicate, P, vw(P) ⊆ Dn (not Dnw), and vw(ℭ) is always Dw” (330-331). (15.3.2) “The truth conditions for atomic sentences, truth-functional and modal operators, are as in the constant domain case. Those for the quantifiers (as is to be expected) are” (included in the following):

vw(¬A) = 1 if vw(A) = 0, and 0 otherwise.

vw(A ∧ B) = 1 if vw(A) = vw(B) = 1, and 0 otherwise.

vw(A ∨ B) = 1 if vw(A) = 1 or vw(B) = 1, and 0 otherwise.

[…]

vw(◊A) = 1 if, for some w′ ∈ W such that wRw′, vw′(A) = 1; and 0 otherwise.

vw(□A) = 1 if, for all w′ ∈ W such that wRw′, vw′(A) = 1; and 0 otherwise.

(pp.21-22, sections 2.3.4, 2.3.5)

vw(∃xA) = 1 iff for some d ∈ Dw, vw(Ax(kd)) = 1

vw(∀xA) = 1 iff for all d ∈ Dw, vw(Ax(kd)) = 1

(331)

(15.3.3) “Semantic validity is defined in terms of truth preservation at all worlds of all interpretations, as in the constant domain case” (331). (15.3.4) “These semantics give the variable domain version of the propositional logic K, VK” (331). (15.3.5) “Adding constraints on the accessibility relation produces the extensions VKρ, VKρσ, etc.” (331).
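The only change from the constant domain evaluator sketched in chapter 14 is the quantifier clauses, which now range over the world's own domain. A brief Python sketch (mine, with a hypothetical assignment of domains to worlds):

# Variable domains: each world w gets its own Dw = v(w) ⊆ D.
D = {"d1", "d2"}
Dw = {0: {"d1"}, 1: {"d1", "d2"}}               # hypothetical domains per world

def val_forall(w, body):                        # vw(∀xA) = 1 iff all d ∈ Dw ...
    return all(body(w, d) for d in Dw[w])

def val_exists(w, body):                        # vw(∃xA) = 1 iff some d ∈ Dw ...
    return any(body(w, d) for d in Dw[w])

def val_E(w, d):                                # vw(ℭ) is always Dw
    return d in Dw[w]

P = {"d1"}                                      # a world-invariant toy extension
print(val_forall(0, lambda w, d: d in P))       # True: every object in D0 is in P
print(val_forall(1, lambda w, d: d in P))       # False: d2 ∈ D1 but d2 ∉ P
print(val_exists(0, lambda w, d: val_E(w, d)))  # True: ∃xℭx holds at world 0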





ch.16

Necessary Identity in Modal Logic

 

16.1

Introduction

 

(16.1.1) We turn now to identity in modal and tense logic. There are two kinds of modal logic semantics for identity: {1} necessary (or “world-invariant”) and {2} contingent (or “world-variant”). (16.1.2) “If S is any system of logic without identity, S(NI) will denote the system augmented by necessary identity, and S(CI) will denote the system of logic augmented by contingent identity” (349). (16.1.3) “We will assume, first, that the Negativity Constraint is not in operation. We will then see how its addition affects matters” (349). (16.1.4) “Next, we will look at the distinction between rigid and non-rigid designators, and see how non-rigid designators can be added to the logic” (349). (16.1.5) At the end of the chapter “there is a short philosophical discussion of how this distinction applies to names and descriptions in a natural language such as English” (350).

 

 

16.2

Necessary Identity

 

(16.2.1) We will now define the identity predicate in a quantified normal modal logic. (16.2.2) “The denotation of the identity predicate is the same in every world, w, of an interpretation: vw(=) = {⟨d, d⟩ : d ∈ D}” (350). (16.2.3) There are three tableau rules for identity (see below).

 

Principle of Identity

Development (=D)

.

a = a,i

 

(You can always add a line of the form a = a,i)

 

Substitutivity of Identicals (SI,D)

a = b,i

Ax(a),i

Ax(b),i

 

(where A is any atomic sentence distinct from a = b.)

(Note: the world index on every line is the same, so substitution is licensed only within a world.)

 

Identity Invariance Rule (IIR,D)

a = b,i

a = b,j

 

(where j is any world parameter on the branch distinct from i)

(350, with names and additional text at the bottom made by me)

 

(16.2.4) Priest next gives two example tableaux for valid formulas in VK(NI), which is a variable domain system with necessary identity. And, “For future reference, we will call the formula ∀x∀y(x = y ⊃ □x = y) NI (Necessary Identity)” (351). (16.2.5) Priest next gives an example tableau for an invalid formula. (16.2.6) “Counter-models are read off from open branches as usual. In particular, where there is a bunch of lines of the form a = b, 0, b = c, 0, etc., a single denotation is provided for all the constants” (352). (16.2.7) Priest next gives an example counter-model.



16.3

The Negativity Constraint

 

(16.3.1) “In this section, we will see how the addition of the Negativity Constraint affects matters” (352). (16.3.2) “In the presence of the [negativity] constraint, non-existent objects cannot be in the extension of the identity predicate. Hence, vw(=) = {⟨d, d⟩ : d ∈ vw(ℭ)}” (352). (16.3.3) Priest next gives the identity rules for our negatively constrained system. (16.3.4) Priest then provides an example tableau in VK(NI) with the negativity constraint for a valid formula. (16.3.5) When we have the negativity constraint, necessary identity is invalidated. (16.3.6) “To read off a counter-model from an open branch of a tableau when the Negativity Constraint is in operation, we give constants the same denotation provided they are said to be the same at some world. Thus, for | example, if we have a = b,i and b = c,j, we give a, b and c the same denotation” (354).



16.4

Rigid and Non-rigid Designators

 

(16.4.1) There is a standard objection to quantified modal logic, namely, that it leads to claims of necessity regarding matters that are really contingent, on account of the workings of necessary identity. For example, “Beethoven wrote nine symphonies. Therefore 9 = β, where β is ‘the number of symphonies that Beethoven wrote’. Given NI, ∀x∀y(x = y ⊃ □x = y), it follows that □9 = β; that is, necessarily the number of Beethoven symphonies is nine – which is false, since Beethoven could have died immediately after writing the eighth” (354). (16.4.2) The negativity constraint will not prevent this problem; for, even under the constraint, a = b ⊃ □(ℭa ⊃ a = b) is still valid, and so we must draw the same conclusion: “Since 9 = β, it still follows that □(ℭ9 ⊃ 9 = β), and so □ℭ9 ⊃ □9 = β” (354). (16.4.3) Priest diagnoses the problem in the following way: “What has gone wrong with the argument is, in fact, that the noun phrase β, ‘the number of symphonies written by Beethoven’ is a noun phrase that may change its denotation from world to world. In some worlds, Beethoven wrote eight symphonies, in some two, in some 147” (354). (16.4.4) When we consider a constant as having world-invariant denotation (like β, ‘the number of symphonies written by Beethoven’, being understood as being 9 in all worlds), it is a rigid designator, and we write it under the form: v(c). However, constants that do vary with the world are called non-rigid designators (like β, ‘the number of symphonies written by Beethoven’, being understood as potentially taking a different value in different worlds, like 2, 9, or 147), and we write them accordingly under the form vw(c). (“Compare predicates, where extensions may change from world to world, and we write vw(P), not v(P)” (354-355).) (16.4.5) We then “augment the language with a collection of new constants: α0, α1, α2, . . . and call these descriptor constants, or just descriptors. I will use α, β, γ, . . . for arbitrary descriptors. I will call our old constants rigid constants. The terms of the language now comprise descriptors, rigid constants and variables” (355). (16.4.6) Priest next gives the truth-conditions for our semantics.

vw(¬A) = 1 if vw(A) = 0, and 0 otherwise.

vw(A ∧ B) = 1 if vw(A) = vw(B) = 1, and 0 otherwise.

vw(A ∨ B) = 1 if vw(A) = 1 or vw(B) = 1, and 0 otherwise.

[…]

vw(◊A) = 1 if, for some w′ ∈ W such that wRw′, vw′(A) = 1; and 0 otherwise.

vw(□A) = 1 if, for all w′ ∈ W such that wRw′, vw′(A) = 1; and 0 otherwise.

(pp.21-22, sections 2.3.4, 2.3.5)

vw(∃xA) = 1 iff for some d ∈ Dw, vw(Ax(kd)) = 1

vw(∀xA) = 1 iff for all d ∈ Dw, vw(Ax(kd)) = 1

(p.331, section 15.3.2)

vw(Pt1 . . . tn) = 1 iff ⟨vw(t1), . . . , vw(tn)⟩ ∈ vw(P)

(355)
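A toy illustration (mine, with hypothetical worlds and values) of why the Beethoven argument of 16.4.1 fails once β is treated as a descriptor: the truth condition just quoted uses vw(β), which can vary from world to world, so 9 = β can hold at the actual world while □(9 = β) fails there.

# Hypothetical two-world setup; β is the descriptor 'the number of symphonies
# that Beethoven wrote', whose denotation is world-relative.
W = {"actual", "w1"}
R = {("actual", "actual"), ("actual", "w1")}
v_desc = {("β", "actual"): 9, ("β", "w1"): 8}   # vw(β) per world

def holds_identity(w):                          # vw(9 = β): the rigid 9 vs vw(β)
    return 9 == v_desc[("β", w)]

print(holds_identity("actual"))                                    # True: 9 = β
print(all(holds_identity(u) for u in W if ("actual", u) in R))     # False: □(9 = β) fails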

(16.4.7) Priest then explains how we modify the tableau rules to accommodate descriptors: {1} “the IIR applies only if both terms are rigid constants”; {2} “the rules of universal and particular instantiation (and the NCR if it is present) apply only to rigid constants”; and {3} there is a new identity rule.

 

Constant-Descriptor Identity (CDI,D)

.

c = α,i

 

(c is a constant new to the branch. This rule is applied to every descriptor, α, on the branch, and every i on the branch, for which there is not already a line of this form.)

(355, with names and additional text at the bottom made by me. The name is my own fabrication and probably needs correction.)

 

(16.4.8) Priest then gives an example tableau for a valid inference in CK(NI) with designators. (16.4.9) Then Priest gives an example tableau for an invalid formula. (16.4.10) “We read off a counter-model from an open branch of a tableau as before. In addition, if there is a line of the form c=β,i on the tableau, we set vwi(β) to v(c). (Note that if we have lines of the form c1=β,i and c2=β,i, then we have a line of the form c1=c2,i, by SI, so v(c1) = v(c2).)” (356). (16.4.11) Priest then gives an example counter-model. (16.4.12) “Note that various quantifier inferences that hold for rigid constants may fail for descriptors. Thus, □Pα ⊬CK ∃x□Px” (357). (16.4.13) These tableaux are sound and complete.



16.5

Names and Descriptions

 

(16.5.1) We now wonder about classifying noun-phrases as being either rigid or non-rigid designators. Definite descriptions, like “the number of symphonies composed by Beethoven”, are usually non-rigid (although with some exceptions, like “the least natural number,” which is 0 in all worlds). (16.5.2) So descriptions are often non-rigid. We wonder now about proper names. They are often understood as non-rigid, that is, as disguised descriptions. But a description assigned to a proper name, like “teacher of Alexander the Great” as the equivalent of ‘Aristotle’, can take a different value in a possible world where someone other than Aristotle teaches Alexander; in such a world, ‘Aristotle is the teacher of Alexander the Great’ – which under the proposed equivalence should amount to the logical tautology ‘The teacher of Alexander the Great is the teacher of Alexander the Great’ – comes out false (given the equivalence of “Aristotle” and “teacher of Alexander the Great”). (16.5.3) “It is therefore plausible to suppose that proper names in a natural language (at least when appropriately disambiguated to a particular object) are rigid designators. Thus, they latch on to the object they denote, not via some implicit descriptive content, but by a more direct mechanism” (358). (16.5.4) In Kripke’s account, the coiner of a name baptizes the denoted object with that name, which then refers to its denoted object rigidly in all worlds. “(They may single x out with a certain description, but if they do, in any other world the name still refers to x, not to whatever satisfies the description at that world.)” (So perhaps in the Aristotle example, when we define him as “the teacher of Alexander the Great,” in the world where Alexander has a different teacher, this noun-phrase still refers to Aristotle and not to the other teacher.) There is then a causal interaction between speakers that communicates that rigid denotation, and thus it is called the causal theory of reference. (16.5.5) However, the causal theory of reference does not explain cases where the transmission of designations breaks down, like how a miscommunication led to the island now called Madagascar getting this name by means of a misunderstanding between the European explorers who asked what the island’s name was and the Africans who were given the impression that they should give the name for some region on the African mainland. (In other words, the naming was not caused by transmissions linking back to a baptism, as the theory suggests it should happen.)




ch.21

Many-valued Logics

 

21.1

Introduction

 

(21.1.1) We turn now to many-valued logics. (21.1.2) We examine many-valued quantification logics in general and later 3-valued logics in particular. (21.1.3) After that we turn to free versions of many-valued logics. (21.1.4) And afterward we examine identity in these logics. (21.1.5) We end with some discussion on supervaluation and subvaluation.



21.2

Quantified Many-valued Logics

 

(21.2.1) A “propositional many-valued logic is characterised by a structure ⟨V, D, {fc : c ∈ C}⟩, where V is the set of truth values, D ⊆ V is the set of designated values, and for each connective, c, fc is the truth function it denotes. An interpretation, v, assigns values to propositional parameters; | the values of all formulas can then be computed using the fcs; and a valid inference is one that preserves designated values in every interpretation” (456-457). (21.2.2) “A quantified many-valued logic is characterised by a structure of the form ⟨D, V, D, {fc : c ∈ C}, {fq : q ∈ Q}⟩. V, D, and {fc : c ∈ C} are as before. D is a non-empty domain of quantification, and if Q is the set of quantifiers in the language, for every q ∈ Q, fq is a map from subsets of V into V. (In a free many-valued logic, there is an extra component, the inner domain, E, and E ⊆ D.)” (457). (21.2.3) An “evaluation, v, assigns every constant a member of D and every n-place predicate an n-place function from the domain into the truth values. (So if P is any predicate, v(P) is a function with inputs in D and an output in V.) Given an evaluation, every formula, A, is then assigned a value, v(A), in V recursively, as follows. If P is any n-place predicate: v(Pa1 . . . an) = v(P)(v(a1), . . . , v(an)). For each n-place propositional connective, c: v(c(A1, . . . , An)) = fc(v(A1), . . . , v(An)) as in the propositional case. And for each quantifier, q: v(qxA) = fq({v(Ax(kd)): d ∈ D}). (In a free many-valued logic, ‘D’ is replaced by ‘E’.)” (457). (21.2.4) An “inference is valid if it preserves designated values. Thus, Σ ⊨ A iff for every interpretation, whenever v(B) ∈ D, for all B ∈ Σ, v(A) ∈ D” (457).
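A generic sketch (my own, not Priest's) of the machinery of 21.2.2–21.2.3, instantiated with a hypothetical 3-valued example: the quantifier functions f∀ and f∃ map the set of instance values to a single value.

# Hypothetical 3-valued structure: values V, designated values DES, domain D.
V = {0, "i", 1}
DES = {1}
D = {"d1", "d2"}

order = {0: 0, "i": 1, 1: 2}               # the ranking 0 < i < 1
f_all  = lambda X: min(X, key=order.get)   # fq for ∀: greatest lower bound
f_some = lambda X: max(X, key=order.get)   # fq for ∃: least upper bound

v_P = {"d1": 1, "d2": "i"}                 # v(P): a function from D into V
v_const = {"a": "d1"}

def v_atom(P, a):                          # v(Pa) = v(P)(v(a))
    return P[v_const[a]]

def v_forall(P):                           # v(∀xPx) = f∀({v(P)(d) : d ∈ D})
    return f_all({P[d] for d in D})

def v_exists(P):                           # v(∃xPx) = f∃({v(P)(d) : d ∈ D})
    return f_some({P[d] for d in D})

print(v_atom(v_P, "a"))       # 1
print(v_forall(v_P))          # i, the minimum of {1, i}
print(v_exists(v_P))          # 1
print(v_forall(v_P) in DES)   # False: ∀xPx is not designated here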

 

21.3

∀ and ∃

 

(21.3.1) We now wonder, how does quantification in many-valued quantified logic behave (or more precisely, how do the functions for the quantifiers, f∀ and f∃, behave)? (21.3.2) “In classical logic, the universal quantifier acts essentially like a conjunction over all the members of the domain. So ∀xA is something like Ax(kd1) ∧ Ax(kd2) ∧ . . . , where d1, d2, . . . are all the members of the domain.” “Dually, the particular quantifier is something like a disjunction over all members of the domain: ∃xA is Ax(kd1) ∨ Ax(kd2) ∨ . . .” (458). We would expect the universal and particular quantifiers to behave the same way in many-valued logics. (21.3.3) We can use the greatest lower bound (Glb) of the conjuncts (that is, the greatest truth-value that is less than or equal to the values assigned to the conjuncts) and the least upper bound (Lub) of the disjuncts (that is, the least truth-value greater than or equal to the value assigned to either disjunct) in order to evaluate universal and particular quantification, respectively: we define “f∀(X) as Glb(X), so that v(∀xA) is the greatest lower bound of {v(Ax(kd)): d ∈ D}” and we “define f∃(X) as Lub(X), and v(∃xA) is the least upper bound of {v(Ax(kd)): d ∈ D}” (458).



21.4

Some 3-valued Logics

 

(21.4.1) We can evaluate quantification in 3-valued logics in the following way. We rank the values as: 0 < i < 1. And: “v(∀xA) = Glb({v(Ax(kd)) : d ∈ D}); and because this set is finite (it can have at most three members), and the values are linearly ordered, the greatest lower bound is the minimum (Min) of these values. Similarly, v(∃xA) is the maximum (Max) of the values in the set. Thus, ∀xA takes the value 1 if all instantiations with the constants kd take the value 1; it takes the value 0 if some instantiation takes the value 0; otherwise it takes the value i. Dually, ∃xA takes the value 1 if some instantiation with a constant kd takes the value 1; it takes the value 0 if all instantiations take the value 0; otherwise it takes the value i” (459). (21.4.2) As the 3-valued logics we deal with here do not differ in their V, D, and truth-function values, we can simply regard their semantic structures as being “of the form ⟨D, v⟩, where D is the domain of quantification, and v assigns a denotation to each constant and predicate” (459). (21.4.3) As we will not in this chapter examine the tableau systems for these logics, we need to argue directly for the validity of inferences. (21.4.4) Priest next gives an example proof argument using reductio. (21.4.5) Priest next makes an example proof argument by contraposition. (21.4.6) Priest next shows how to establish invalid inferences by constructing counter-models by trial and error.
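In the spirit of the trial-and-error method of 21.4.6, here is a small Python check (mine, with a hypothetical one-object domain) that ∀x(Px ∨ ¬Px) is not a logical truth of K3: one borderline instance is enough.

# K3 ordering 0 < i < 1, with 1 as the only designated value.
order = {0: 0, "i": 1, 1: 2}
DES = {1}

def f_neg(x):
    return {0: 1, "i": "i", 1: 0}[x]
def f_or(x, y):
    return max(x, y, key=order.get)

D = {"d1"}
v_P = {"d1": "i"}                                  # one borderline instance

instances = {f_or(v_P[d], f_neg(v_P[d])) for d in D}
v_claim = min(instances, key=order.get)            # v(∀x(Px ∨ ¬Px)) = Glb = Min

print(v_claim)           # i
print(v_claim in DES)    # False: not designated, so not a logical truth of K3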



21.5

Their Free Versions

 

(21.5.1) Quantified many-valued logics still have the problematic inferences Pa ⊨ ∃xPx and ∀xPx ⊨ Pa, and they can be solved using free logics. (21.5.2) Our quantified 3-valued logics are structured in the following way: “We take the language to contain an existence predicate, ℭ. An interpretation is a triple ⟨D, E, v⟩. D is the domain of all objects, and E ⊆ D contains those that are thought of as existent. For every constant, c, v(c) ∈ D. For every n-place predicate, P, v(P) is a function such that if d1, . . . , dn ∈ D, v(P)(d1, . . . , dn) ∈ V. v(ℭ) is such that: v(ℭ)(d) ∈ D iff d ∈ E. Truth conditions are as in the non-free case, except that for the quantifiers v(∀xA) = Min({v(Ax(kd)): d ∈ E}) (not D), and v(∃xA) = Max({v(Ax(kd)): d ∈ E})” (461-462). (21.5.3) Using these semantics, counter-models can be constructed for the above problematic inferences. (21.5.4) In our free version of many-valued logics, we establish validity and invalidity in the same way as in the non-free versions. When D = E, then “anything valid in any many-valued free logic is valid in the corresponding non-free logic.” “Conversely, suppose that the inference with premises Σ and conclusion A is valid in one of our 3-valued logics. Let C be the set of constants that occur in A and all members of Σ, and let Π = {ℭc: c ∈ C} ∪ {∃xℭx}. (The quantified sentence is redundant if C ≠ φ.) Then Π∪Σ ⊨ A in the corresponding free logic (where quantifiers are inner)” (462).
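A sketch (mine, with hypothetical values) of the kind of counter-model mentioned in 21.5.3: because the free quantifiers range over E, Pa can be designated while ∃xPx is not, provided a denotes something outside E.

order = {0: 0, "i": 1, 1: 2}
DES = {1}

D = {"d1", "d2"}
E = {"d2"}                       # inner domain of existents
v_const = {"a": "d1"}            # a denotes a non-existent object
v_P = {"d1": 1, "d2": 0}

v_Pa = v_P[v_const["a"]]                            # v(Pa)
v_somePx = max((v_P[d] for d in E), key=order.get)  # v(∃xPx) = Max over E

print(v_Pa in DES)        # True
print(v_somePx in DES)    # False: the inference Pa ⊨ ∃xPx fails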



21.6

Existence and Quantification

 

(21.6.1) We can add inner and outer quantifiers to our quantified 3-valued logics. Outer quantifiers behave as before, but the definition of the inner quantifiers in terms of the outer ones becomes problematic when existence statements take the value i, and so the inner quantifiers are taken as primitive. (21.6.2) We now wonder if it makes sense for the existence predicate to have non-classical values. (21.6.3) According to a certain view, we can think of existence statements of the form ℭa taking the value i under the sense of neither true nor false. (21.6.4) One argument for truth-valueless existence statements could be that non-denoting ones are valueless. “But the claim about non-denotation is not very plausible as far as the existence predicate goes. Supposing that the name ‘Sherlock Holmes’ does not denote anything, it would seem that ‘Sherlock Holmes exists’ is false, not truth-valueless” (464). (21.6.5) Another possibility is to say that existence statements can be neither true nor false when they state the existence of something bound up with a future contingency. So, we might say, “‘The first Pope of the 25th century will exist (but does not yet)’ or ‘Hilary will exist’ – where ‘Hilary’ rigidly designates the first Pope of the 25th century – is neither true nor false. But this seems wrong. If there is such a Pope, this is true” (464). (21.6.6) There is a stronger argument for truth-valueless existence statements, namely, one that appeals to verificationism. So if “one can verify neither ‘a exists’ nor its negation, for some suitable a, then this statement is neither true nor false. Thus, for example, ‘The author of the Dao De Ching in fact existed’, or ‘Laozi in fact existed’ might be of this kind” (464). (21.6.7) Another way that we can have valueless existence statements would be through the borderline ranges of vague predicates, as for example during the gradual process of death, where for a certain period some but not all vital bodily functions have ceased, and thus there is “a grey area where it is vague as to whether or not someone exists”. (21.6.8) We can also think of borderline existence cases as involving the value i with the sense of both true and false. For, “What intuition tells us, after all, is that the statement in question seems to be as true as it is false, as false as it is true; and, as far as that goes, the symmetric positions, both and neither, would seem to be as good as each other. Hence, borderline cases of existence might deliver existence statements that are both true and false” (464). (21.6.9) There are existence statements involving paradoxical self-reference that can be considered both true and false. Priest gives the example of Berry’s paradox. “Consider all those (whole) numbers that can be specified in English by a (context-independent) description with less than, say, 100 words. There is a finite number of these, so there are many numbers that cannot be so specified. There must therefore be a least. But there cannot be such a number, since if it did exist it would be specified by the description ‘the least (whole) number that cannot be specified in English by a description with less than 100 words’. The least whole number that cannot be specified in English by a description with less than 100 words both does and does not, therefore, exist” (465).



21.7

Neutral Free Logics

 

(21.7.1) We will now examine neutral free logics, where applying a predicate to a non-existent object always results in the semantic value neither true nor false (i). (21.7.2) Free logics are rendered neutral free logics by the addition of the neutrality constraint: if, for some 1 ≤ j ≤ n, dj ∉ E, then v(P)(d1, . . . , dn) = i. (In other words, formulas that predicate at least one non-existent object will be valued i, here understood as neither true nor false.) (21.7.3) Neutral free logics can alternatively be defined by using only a domain E of existents and by using the v denotation function for names as a partial function that leaves some values undefined, and so: 

if v(a1) = d1, . . . , v(an) = dn then v(Pa1 . . . an) = v(P)(d1 . . . dn)

if any of v(a1), …, v(an) is undefined, v(Pa1 . . . an) = i

(466)

(21.7.4) We can use this strategy also to give an alternative definition for negative free logics: “The denotation function for names is taken to be partial, and the truth conditions of atomic sentences are given as [in the section above], replacing ‘= i’ with ‘≠ 1’ ” (466). So (I presume, perhaps incorrectly):

if v(a1) = d1, . . . , v(an) = dn then v(Pa1 . . . an) = v(P)(d1 . . . dn)

if any of v(a1), …, v(an) is undefined, v(Pa1 . . . an) ≠ 1

(21.7.5) “The Neutrality Constraint gives rise to valid inferences that are not valid in a positive free logic. For example […], Pa1 . . . an ⊨ ℭa1 ∧ . . . ∧ ℭan and ¬ Pa1 . . . an ⊨ ℭa1 ∧ . . . ∧ ℭan. Negative free logics make the first of these valid, but not the second” (466). (21.7.6) In neutral free logics, we would say that statements with non-existent objects like “The greatest prime number is even” and “The King of France is bald” are valued i or neither true nor false. But we cannot say that all statements with non-existent objects are neither true nor false. “For it would seem that ‘The greatest prime number exists’ and ‘The King of France exists’ are both false, not neither true nor false” (466). But, that prevents there from being an obvious formal standard for determining which statements are exceptions. And so, by making one arbitrary exception for existence statements of non-existent objects, what stops us from making other exceptions, like saying that certain statements regarding non-existents are true, like “Homer worshipped Zeus” and “I am thinking about Sherlock Holmes”? (21.7.7) “Hence, though some sentences with non-denoting terms may be neither true nor false, not all would seem to be; the most appropriate free logic, even in a many-valued context, would appear to be a positive one” (467).
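A minimal sketch (mine, with a hypothetical two-place predicate) of the Neutrality Constraint of 21.7.2: an atomic sentence takes the value i as soon as one of its arguments lies outside E.

E = {"d1"}
v_P = {("d1", "d1"): 1, ("d1", "d2"): 0}     # rows involving d2 get overridden

def v_atomic(P, *ds):
    # the neutrality constraint: any non-existent argument yields i
    if any(d not in E for d in ds):
        return "i"
    return P[ds]

print(v_atomic(v_P, "d1", "d1"))   # 1
print(v_atomic(v_P, "d1", "d2"))   # i: d2 does not exist

This is what validates both Pa1 . . . an ⊨ ℭa1 ∧ . . . ∧ ℭan and its negated counterpart: a designated atomic value (or its negation) can only arise when every argument is in E.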

 

 

21.8

Identity

 

(21.8.1) We define identity in our many-valued quantified logics as:

v(=)(d1, d2) ∈ D iff d1 = d2

(21.8.2) Under this definition of identity, the following inferences are valid: ⊨ a=a and a=b, Pa ⊨ Pb. (21.8.3) Under this definition, the following inferences are also valid: a=b ⊨ b=a and a=b, b=c ⊨ a=c. (In other words, identity is reflexive (see above), symmetric, and transitive.) It is also substitutable: a=b, Ax(a) ⊨ Ax(b). This holds even when identity is valued i. (21.8.4) “If we are in a logic where i is thought of as neither true nor false, and we enforce the neutrality constraint, then the truth conditions for identity become: if v(a) ∈ E and v(b) ∈ E then v(=)(a, b) ∈ D iff v(a) = v(b); if v(a) ∉ E or v(b) ∉ E then v(=)(a, b) = i (which makes sense provided that i ∉ D). Or, if we dispense with the outer domain, and take the denotation function to be a partial function: if v(a) and v(b) are defined then v(=)(a, b) ∈ D iff v(a) = v(b); if either v(a) or v(b) is not defined then v(=)(a, b) = i” (467). (21.8.5) But, if in our logic i is neither true nor false and we also enforce the neutrality constraint, then ⊨ a=a is no longer valid (for, if a is non-existent, then a=a is i, and thus not a designated value). However, a=b, Pa ⊨ Pb and more generally, a=b, Ax(a) ⊨ Ax(b) are valid. (21.8.6) Lastly, Priest notes that “given the neutrality constraint, a=b ⊨ ℭa ∧ ℭb and ℭa ⊨ a=a” (467).
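A quick sketch (mine) of the point in 21.8.4–21.8.5: with the neutrality constraint in force and i read as neither true nor false, a = a is no longer designated when a denotes a non-existent object.

DES = {1}
E = {"d1"}
v_const = {"a": "d2"}            # a denotes something outside E

def v_id(d, e):                  # neutral truth conditions for identity
    if d not in E or e not in E:
        return "i"
    return 1 if d == e else 0

d = v_const["a"]
print(v_id(d, d))          # i
print(v_id(d, d) in DES)   # False: ⊨ a = a fails under the constraint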



21.9

Non-classical Identity

 

(21.9.1) We now wonder whether it “is plausible to suppose that identity statements may take non-classical values, that is, values other than 0 and 1”. (21.9.2) We might think that identity statements involving the following circumstances could take non-classical values: non-denoting terms, future contingents, verificationism, vague predicates, and paradoxes of self-reference. (21.9.3) Priest will focus here on circumstances involving vagueness with regard to identity statements. “Suppose that I have two motorbikes, a and b. Suppose that I dismantle a and, over a period of time, replace each part of b with the corresponding part of a. At the start, the machine is b; at the end, it is a. Let us call the object somewhere in the middle of the transition c. Is it true that c = a (or c = b)? It is not clear; we would seem to be in a borderline situation, so the identity predicate can be a vague one. And if one takes vague predicates to have a non-classical value (both true and false or neither true nor false) when applied to borderline cases, then there are identity statements that take such values” (468). (21.9.4) Gareth Evans argues against the possibility of borderline or vague identity statements being assigned non-classical values. (We first say that we will call an identity statement “indeterminate” when its truth-value is i. We next suppose that we have such an indeterminate identity statement a = b. But, since it is determinately true that a = a – it is 1 rather than i – we can infer that a and b have different properties; for, we cannot say that a = b, because this is indeterminately true and is not 1. And, on account of the indiscernibility of identicals, because a = a, that means a has certain properties which allow it to identify with itself by means of indiscernibility. So since it has properties but since they cannot be the same as b, we may infer that a ≠ b. Let me quote Priest so to have it exactly right:) “Let us say that an identity is indeterminate if the statement expressing it takes the value i. The argument goes as follows. Suppose that it is indeterminate whether a = b. It is determinately true that a = a, so a and b have different properties, and thus, a ≠ b. Thus, the identity is not indeterminate: it is false. There are therefore no indeterminate identities” (468). (21.9.5) The inference of this argument against non-classical identity is based on a contraposed form of the substitutivity of identicals.
(The best I have in my own words right now, to be revised later, is: We assume that you can substitute determinately identical terms one for the other in predications, and if by making such a substitution you generate a contradiction, then the terms are not determinately identical (although they can still be indeterminately identical). We next affirm two things that we know to be true, namely, that a is indeterminately identical to b, and that a is not indeterminately equal to a. Here, on account of the substitutivity of identicals, we need to conclude that a is not determinately identical to b. For, were it so that they were determinately identical, then we would have the following contradiction, namely, that both ‘a is indeterminately equal to b’ and that ‘a is not indeterminately equal to b.’ Thus given this contradiction that ((determinately)) ‘a = b’ would cause on account of the substitutivity of identicals, we must conclude instead that a ≠ b.) (Now the correct account, all in Priest’s words:) “To analyse this argument, let us suppose that we are using one of our 3-valued logics; let us write ∇ for ‘it is indeterminate that’, and suppose that: v(∇A) ∈ D if v(A) = i ; v(∇A) = 0 otherwise . Then the argument is simply:

Suppose that ∇a = b    (1)

Then since ¬∇a = a     (2)

It follows that a ≠ b    (3)

The inference is a contraposed form of SI; SI itself we know to be valid” (468). (21.9.6) Evans’ argument against indeterminate identity must have something wrong about how it proceeds, because the machinery of 3-valued logics does indeed allow for identity statements to take the value i. (21.9.7) Evans’ argument does not hold for gap 3-valued logics, because when a and b denote distinct objects, the premises are true but the conclusion is not true: “Consider the K3 or Ł3 evaluation in which: v(=)(d, e) = 1 if v(d) = v(e); v(=)(d, e) = i if v(d) ≠ v(e). Let a and b denote distinct objects. Then a = b has the value i, so ∇a = b has the value 1. a = a has the value 1, so ¬∇a = a has the value 1. But a = b, and so its negation, has the value i” (468). (21.9.8) In glut logics the inference remains valid under the above interpretation of identity (indeed even without the second premise), but this does not rule out indeterminate identity: under an alternate interpretation, on which identity statements about things that are the same take the value i (both true and false), the premises and the conclusion are all designated while a = b itself still takes the value i. Thus non-classical identity can hold in glut 3-valued logics: “In LP and RM3, the inference is valid, even without the second premise. Suppose that the value of ∇a = b is designated. Then the value of a = b is i. So the value of the conclusion, a ≠ b, is also designated. But this does not rule out indeterminate identity statements. Consider an LP or RM3 interpretation in which: v(=)(d, e) = i if v(d) = v(e); v(=)(d, e) = 0 if v(d) ≠ v(e). Let a and b denote the same object, then (1), (2) and (3) are all designated. Yet a = b has the value i” (469).
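To see the two evaluations of 21.9.7–21.9.8 concretely, here is a Python sketch (my own; setting v(∇A) = 1 when v(A) = i is one way of meeting Priest's condition that v(∇A) be designated in that case).

def nabla(x):                            # one choice for ∇: designated iff v(A) = i
    return 1 if x == "i" else 0

def f_neg(x):
    return {0: 1, "i": "i", 1: 0}[x]

# K3/Ł3 evaluation of 21.9.7: a and b denote distinct objects, and
# v(=)(d, e) = 1 if the objects are the same, i otherwise.
DES_K3 = {1}
v_ab, v_aa = "i", 1                      # v(a = b), v(a = a)
premise1 = nabla(v_ab)                   # ∇ a = b
premise2 = f_neg(nabla(v_aa))            # ¬∇ a = a
conclusion = f_neg(v_ab)                 # a ≠ b
print(premise1 in DES_K3, premise2 in DES_K3, conclusion in DES_K3)
# True True False: the premises are designated but the conclusion is not.

# LP/RM3 evaluation of 21.9.8: a and b denote the same object, and
# v(=)(d, e) = i when the objects are the same, 0 otherwise.
DES_LP = {1, "i"}
v_ab = "i"                               # v(a = b) for co-denoting a, b
print(nabla(v_ab) in DES_LP, f_neg(v_ab) in DES_LP)
# True True: ∇a = b and the conclusion a ≠ b are designated, yet a = b itself
# still has the value i, so indeterminate identity is not ruled out.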

 






 
Priest, Graham. 2008 [2001]. An Introduction to Non-Classical Logic: From If to Is, 2nd edn. Cambridge: Cambridge University Press.


