24 May 2018

Priest (4.9) An Introduction to Non-Classical Logic, ‘Lewis’ Argument for Explosion,’ summary

 

by Corry Shores

 

[Search Blog Here. Index-tags are found on the bottom of the left column.]

 

[Central Entry Directory]

[Logic and Semantics, entry directory]

[Graham Priest, entry directory]

[Priest, Introduction to Non-Classical Logic, entry directory]

 

[The following is summary of Priest’s text, which is already written with maximum efficiency. Bracketed commentary and boldface are my own, unless otherwise noted. I do not have specialized training in this field, so please trust the original text over my summarization. I apologize for my typos and other unfortunate mistakes, because I have not finished proofreading, and I also have not finished learning all the basics of these logics.]

 

 

 

 

Summary of

 

Graham Priest

 

An Introduction to Non-Classical Logic: From If to Is

 

Part I:

Propositional Logic

 

4.

Non-Normal Modal Logics; Strict Conditionals

 

4.9

Lewis’ Argument for Explosion

 

 

 

 

Brief summary:

(4.9.1) Strict conditionals do not require relevance, as we see for example with: ⊨ (A ∧ ¬A) ⥽ B. So we might object to them on this basis. (4.9.2) C.I. Lewis argues that (A ∧ ¬A) ⥽ B is intuitively valid, because from A ∧ ¬A it is intuitively valid to infer A and ¬A; from ¬A it is intuitively valid to infer ¬A ∨ B; and from A and ¬A ∨ B it is intuitively valid, by disjunctive syllogism, to derive B. [Now, if each step has a connection on the basis of its intuitive validity, then the final conclusion B should have a connection, by extension, to A ∧ ¬A on the basis of the intuitively valid steps leading from the premise to the final conclusion. So despite objections to the contrary, there is a connection between the antecedent and consequent in (A ∧ ¬A) ⥽ B, according to Lewis.] (4.9.3) C.I. Lewis also formulates an argument for the connection between antecedent and consequent for A ⥽ (B ∨ ¬B), but this argument is a bit less convincing than the one for (A ∧ ¬A) ⥽ B.

 

 

 

 

 

 

 

 

Contents

 

4.9.1

[Strict Conditionals as Lacking Relevance]

 

4.9.2

[C.I. Lewis’ Argument for the Connection between Antecedent and Consequent in (A ∧ ¬A) ⥽ B by Means of Disjunctive Syllogism]

 

4.9.3

[C.I. Lewis’ Argument for the Connection between Antecedent and Consequent in A ⥽ (B ∨ ¬B)]

 

 

 

 

 

 

Summary

 

4.9.1

[Strict Conditionals as Lacking Relevance]

 

[Strict conditionals do not require relevance, as we see for example with: ⊨ (A ∧ ¬A) ⥽ B. So we might object to them on this basis.]

 

[Recall from section 4.5.2 and section 4.5.3 that the strict conditional A ⥽ B is defined as □(A ⊃ B). In previous sections – see for example section 4.6 and section 4.8 – Priest has considered objections to the strict conditional ⥽ as providing a correct account of the conditional. Priest will now consider a final objection to this claim about the correctness of the strict conditional. He notes that we have the intuition that this definition is inadequate, because we expect in a conditional that there is some kind of connection between the antecedent and the consequent (for otherwise, what is the sense of the conditionality of their relation?). But strict conditionals do not require any such connection. For example, there is no connection between A ∧ ¬A and B (even though, as we saw in section 4.6.3, ⊨ (A ∧ ¬A) ⥽ B).]

Let us end by considering a final objection to ⥽ as providing a correct account of the conditional. It is natural to object that this account cannot be correct, since a conditional requires some kind of connection between antecedent and consequent; yet a strict conditional requires no such connection. There is no connection in general, for example, between A ∧ ¬A and B.

(76)

[contents]

 

 

 

 

 

4.9.2

[C.I. Lewis’ Argument for the Connection between Antecedent and Consequent in (A ∧ ¬A) ⥽ B by Means of Disjunctive Syllogism]

 

[C.I. Lewis argues that (A ∧ ¬A) ⥽ B is intuitively valid, because from A ∧ ¬A it is intuitively valid to infer A and ¬A; from ¬A it is intuitively valid to infer ¬A ∨ B; and from A and ¬A ∨ B it is intuitively valid, by disjunctive syllogism, to derive B. (Now, if each step has a connection on the basis of its intuitive validity, then the final conclusion B should have a connection, by extension, to A ∧ ¬A on the basis of the intuitively valid steps leading from the premise to the final conclusion. So despite objections to the contrary, there is a connection between the antecedent and consequent in (A ∧ ¬A) ⥽ B, according to Lewis.)]

 

[Despite what we said about relevance above in section 4.9.1, C.I. Lewis does see a connection in the strict conditional even in explosive formulas like ⊨ (A ∧ ¬A) ⥽ B. (On explosion and the strict conditional, see section 4.8.) Only, the connection here is one obtained by a series of inferences, each of which is presumably intuitively valid. (So if each inference is intuitively valid, then they have a logical connection. And so ultimately the explosive inference is intuitively valid.) We begin with a premise that is a contradiction: A ∧ ¬A. We then infer the conjuncts of this conjunction, A and ¬A. From ¬A we infer the disjunction ¬A ∨ B, which, together with A and by disjunctive syllogism, yields B. (The idea might be the following, but I am just guessing here. By modus ponens, from A and A ⊃ B we can infer B. And A ⊃ B is equivalent to ¬A ∨ B. And as we see, by disjunctive syllogism, from A and ¬A ∨ B we can infer B. Furthermore, maybe another idea here is that when some premises validly entail some other formula, then you can make the premises the antecedent and the conclusion the consequent of another formula that will be valid, but I am guessing. So because A ⊃ B is equivalent to ¬A ∨ B, and because the inference from A ∧ ¬A to B is shown to be valid using disjunctive syllogism on premises validly derived from A ∧ ¬A, that means (A ∧ ¬A) ⥽ B should be intuitively valid. Again, these are guesses. See the quotation below.)]

C.I. Lewis, who did accept ⥽ as an adequate account of the conditional, thought that there was a connection, at least in this case. The connection is shown in the following argument:

               A ∧ ¬A
               ______
    A ∧ ¬A       ¬A
    ______     ______
       A       ¬A ∨ B
    _________________
            B
Premises are above lines; conclusions are below. The only ultimate premise is A ∧ ¬A; the only ultimate conclusion is B. The inferences that the argument uses are: inferring a conjunct from a conjunction; inferring a disjunction from a disjunct; and the disjunctive syllogism: A, ¬A ∨ B ⊢ B. Of course, all these are valid in the modal logics we have looked at. If contradictions do not entail everything, then one of these must be wrong. We will return to this point in a later chapter.

(76)
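Each step of Lewis's argument, and the chained explosion they jointly produce, can be checked by brute force over the classical two-valued valuations. This is only a sketch of my own; the helper `valid` and its lambda encodings are not Priest's notation:

```python
from itertools import product

def valid(premises, conclusion, atoms=("A", "B")):
    """Classical validity: no valuation makes every premise true
    while making the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# Conjunct from a conjunction: A ∧ ¬A ⊢ ¬A
assert valid([lambda v: v["A"] and not v["A"]], lambda v: not v["A"])
# Disjunction from a disjunct: ¬A ⊢ ¬A ∨ B
assert valid([lambda v: not v["A"]], lambda v: not v["A"] or v["B"])
# Disjunctive syllogism: A, ¬A ∨ B ⊢ B
assert valid([lambda v: v["A"], lambda v: not v["A"] or v["B"]],
             lambda v: v["B"])
# Chaining the steps gives explosion: A ∧ ¬A ⊢ B
assert valid([lambda v: v["A"] and not v["A"]], lambda v: v["B"])
print("each step, and the chained argument, is classically valid")
```

Note that the last check succeeds vacuously: no classical valuation makes A ∧ ¬A true, so nothing can falsify the entailment.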

[contents]

 

 

 

 

4.9.3

[C.I. Lewis’ Argument for the Connection between Antecedent and Consequent in A ⥽ (B ∨ ¬B)]

 

[C.I. Lewis also formulates an argument for the connection between antecedent and consequent for A ⥽ (B ∨ ¬B), but this argument is a bit less convincing than the one for (A ∧ ¬A) ⥽ B.]

 

[Priest then notes that “Lewis also argued that there is a connection in the case of the conditional A ⥽ (B ∨ ¬B) as well,” using the following argument. We begin with A. From this we infer (A ∧ B) ∨ (A ∧ ¬B). (I am not exactly sure how, but maybe the reasoning is something like the following. Either B or ¬B holds, on account of excluded middle. Since we have affirmed A, either A ∧ B or A ∧ ¬B holds.) From this we infer A ∧ (B ∨ ¬B). (I am not sure how again, but it seems like we extract the A as the common affirmed formula in both disjuncts, leaving B ∨ ¬B.) And from this we infer B ∨ ¬B by pulling it out as one of the conjuncts. So by beginning with A, we can validly infer B ∨ ¬B, and thus A ⥽ (B ∨ ¬B). Priest says this argument is less convincing than the prior one, because “the first step seems evidently to smuggle in the conclusion” (77). (But I am not sure how that works, other than that the B ∨ ¬B that we want to derive is built into (A ∧ B) ∨ (A ∧ ¬B) by a sort of distribution.) Please see the quotation below, as I do not know the precise reasoning for each step.]

Lewis also argued that there is a connection in the case of the conditional A ⥽ (B ∨ ¬B) as well. The connection is provided by the | following argument:

             A
  ____________________
  (A ∧ B) ∨ (A ∧ ¬B)
  ____________________
      A ∧ (B ∨ ¬B)
     _____________
        B ∨ ¬B

This argument is less convincing than that of 4.9.2, however, since the first step seems evidently to smuggle in the conclusion.

(76-77)
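The steps of this second argument can be checked classically in the same brute-force way (again a sketch of my own; `valid` is my helper, not Priest's notation):

```python
from itertools import product

def valid(premises, conclusion, atoms=("A", "B")):
    """Classical validity over all two-valued assignments to the atoms."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# Step 1: A ⊢ (A ∧ B) ∨ (A ∧ ¬B)
assert valid([lambda v: v["A"]],
             lambda v: (v["A"] and v["B"]) or (v["A"] and not v["B"]))
# Step 2: (A ∧ B) ∨ (A ∧ ¬B) ⊢ A ∧ (B ∨ ¬B)
assert valid([lambda v: (v["A"] and v["B"]) or (v["A"] and not v["B"])],
             lambda v: v["A"] and (v["B"] or not v["B"]))
# Step 3: A ∧ (B ∨ ¬B) ⊢ B ∨ ¬B
assert valid([lambda v: v["A"] and (v["B"] or not v["B"])],
             lambda v: v["B"] or not v["B"])
```

Step 1 is where, as Priest observes, the conclusion is smuggled in: A alone classically entails (A ∧ B) ∨ (A ∧ ¬B) only because B ∨ ¬B is already a tautology.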

[contents]

 

 

 

 

 

From:

 

Priest, Graham. 2008 [2001]. An Introduction to Non-Classical Logic: From If to Is, 2nd edn. Cambridge: Cambridge University Press.

 

 


 

23 May 2018

Priest (1.10) An Introduction to Non-Classical Logic, ‘Arguments for ⊃,’ summary

 

by Corry Shores

 


 

 

 

 

Summary of

 

Graham Priest

 

An Introduction to Non-Classical Logic: From If to Is

 

Part I:

Propositional Logic

 

1.

Classical Logic and the Material Conditional

 

1.10

Arguments for ⊃

 

 

 

Brief summary:

(1.10.1) Even though the material conditional, ⊃, is not properly suited to describe the functioning of the English conditional, it had come to be regarded as such on account of there being only standard truth-table semantics until the 1960s, and the only plausible candidate in that semantics for “if” formations would be the material conditional. (1.10.2) However, there are notable arguments that the material conditional can be used to understand the English conditional, and they construe that relation in the following way: “‘If A then B’ is true iff ‘A ⊃ B’ is true.” (1.10.3) If ‘If A then B’ is true, then ¬A ∨ B is true. (1.10.4) “(*) ‘If A then B’ is true if there is some true statement, C, such that from C and A together we can deduce B” (15). (1.10.5) Suppose A and ¬A ∨ B are true. By disjunctive syllogism: A, ¬A ∨ B ⊢ B. This fulfills (*), when we take ¬A ∨ B as the C term. [Now, since A, ¬A ∨ B ⊢ B fulfills the definition of the English conditional, and since A, ¬A ∨ B ⊢ B also gives us the (modus ponens) logic of the conditional (given the equivalence of A ⊃ B and ¬A ∨ B), that means the logic of the English conditional is adequately expressed by A ⊃ B.] (1.10.6) What later proves important in the above argumentation is the use of disjunctive syllogism.

 

 

 

 

Contents

 

1.10.1

[The Prevalence of the Mistaken Identification of the English Conditional with the Material Conditional as Resulting Historically from Limitations in Semantics]

 

1.10.2

[Defining the English Conditional Using the Material Conditional as “‘If A then B’ is true iff ‘A ⊃ B’ is true”]

 

1.10.3

[¬A ∨ B from ‘If A then B’]

 

1.10.4

[C and A Entailing B for “If A then B”]

 

1.10.5

[Disjunctive Syllogism and the English Conditional]

 

1.10.6

[Noting Disjunctive Syllogism in the Argumentation]

 

 

 

 

 

 

Summary

 

1.10.1

[The Prevalence of the Mistaken Identification of the English Conditional with the Material Conditional as Resulting Historically from Limitations in Semantics]

 

[Even though the material conditional, ⊃, is not properly suited to describe the functioning of the English conditional, it had come to be regarded as such on account of there only being standard truth-table semantics until the 1960s, and the only plausible candidate in that semantics for “if” formations would be the material conditional.]

 

[Recall from section 1.8 and section 1.9 that there are a number of ways that the material conditional does not function exactly like the English conditional ‘if’. Priest now explains why many thought it could be adequate despite such problems. He says that this resulted from a historical factor, namely, that standard truth-table semantics were the only semantics we had until the 1960s, and, of the options they provided, “⊃ is the only truth function that looks an even remotely plausible candidate for ‘if’” (15).]

The claim that the English conditional (or even the indicative conditional) is material is therefore hard to sustain. In the light of this it is worth asking why anyone ever thought this. At least in the modern period, a large part of the answer is that, until the 1960s, standard truth-table semantics were the only ones that there were, and ⊃ is the only truth function that looks an even remotely plausible candidate for ‘if’.

(15)

[contents]

 

 

 

 

1.10.2

[Defining the English Conditional Using the Material Conditional as “‘If A then B’ is true iff ‘A ⊃ B’ is true”]

 

[However, there are notable arguments that the material conditional can be used to understand the English conditional, and they construe that relation in the following way: “‘If A then B’ is true iff ‘A ⊃ B’ is true.”]

 

[I might be mistaken about the following, so please consult the quotation below. The idea might be now that we will examine an argument that in fact the material conditional can be used to understand the English conditional. Here it is something like saying: “‘If A then B’ is true iff ‘A ⊃ B’ is true.”]

Some arguments have been offered, however. Here is one, to the effect that ‘If A then B’ is true iff ‘A ⊃ B’ is true.

(15)

[contents]

 

 

 

 

1.10.3

[¬A ∨ B from ‘If A then B’]

 

[If ‘If A then B’ is true, then ¬A ∨ B is true.]

 

[Priest next will show that if ‘If A then B’ is true, then ¬A ∨ B is true. He does this with the following reasoning. First we suppose that ‘If A then B’ is true, and then we consider two further possibilities. He notes that either ¬A or A is true. Take the first possibility, that ¬A is true. That means ¬A ∨ B is true. (I am not entirely sure why, but it might be something like disjunction introduction. See Agler’s Symbolic Logic section 5.3.8. But I am guessing.) Or take the second possibility, that A is true. That means, by modus ponens, B is true. For, we are affirming the antecedent and thus thereby the consequent. So again, ¬A ∨ B is true. (I am again guessing it is something like disjunction introduction.) Thus in either case, ¬A ∨ B is true.]

First, suppose that ‘If A then B’ is true. Either ¬A is true or A is. In this first case, ¬A ∨ B is true. In the second case, B is true by modus ponens. Hence, again, ¬A ∨ B is true. Thus, in either case, ¬A ∨ B is true.

(15)
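The two-case reasoning above tracks the truth table: A ⊃ B and ¬A ∨ B agree at every classical valuation, and whenever the conditional and its antecedent both hold, so does the consequent. A quick check (my own sketch, not Priest's notation):

```python
from itertools import product

# A ⊃ B is false only when A is true and B is false; compare it
# cell by cell with ¬A ∨ B over all four classical valuations.
for A, B in product([True, False], repeat=2):
    material = not (A and not B)   # A ⊃ B
    disjunction = (not A) or B     # ¬A ∨ B
    assert material == disjunction
    # And whenever the conditional and A both hold, B holds (modus ponens):
    if material and A:
        assert B
```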

[contents]

 

 

 

 

1.10.4

[C and A Entailing B for “If A then B”]

 

[“(*) ‘If A then B’ is true if there is some true statement, C, such that from C and A together we can deduce B” (15).]

 

[I do not follow the next part, so please see the quotation below. First Priest speaks of the “converse argument”. I do not know what that is. Is it that from ¬A ∨ B we can derive ‘If A then B’? I do not know. And even if it were, I do not understand Priest’s point that the converse argument appeals to the claim that “(*) ‘If A then B’ is true if there is some true statement, C, such that from C and A together we can deduce B.” In fact, I do not know what it means to “appeal to a claim.” But the example makes sense: “Thus, we agree that the conditional ‘If Oswald didn’t kill Kennedy, someone else did’ is true because we can deduce that someone other than Oswald killed Kennedy from the fact that Kennedy was murdered and Oswald did not do it.” Yet I am not sure if and how to relate this to ¬A ∨ B. Perhaps I could follow this if I had access to the Faris text that Priest later cites for this issue (‘Interderivability of “⊃” and “If” ’), but currently I do not have it. I will continue this poor explanation in the next section.]

The converse argument appeals to the following plausible claim:

(*) ‘If A then B’ is true if there is some true statement, C, such that from C and A together we can deduce B.

Thus, we agree that the conditional ‘If Oswald didn’t kill Kennedy, someone else did’ is true because we can deduce that someone other than Oswald killed Kennedy from the fact that Kennedy was murdered and Oswald did not do it.

(15)

[contents]

 

 

 

 

 

1.10.5

[Disjunctive Syllogism and the English Conditional]

 

[Suppose A and ¬A ∨ B are true. By disjunctive syllogism: A, ¬A ∨ B ⊢ B. This fulfills (*), when we take ¬A ∨ B as the C term. (Now, since A, ¬A ∨ B ⊢ B fulfills the definition of the English conditional, and since A, ¬A ∨ B ⊢ B also gives us the (modus ponens) logic of the conditional (given the equivalence of A ⊃ B and ¬A ∨ B), that means the logic of the English conditional is adequately expressed by A ⊃ B.)]

 

[Priest now says that “suppose that ¬A ∨ B is true. Then from this and A we can deduce B, by the disjunctive syllogism: A, ¬A ∨ B ⊢ B. Hence, by (*), ‘If A then B’ is true” (16). Maybe this corresponds to (*) when we take ¬A ∨ B to be the C term in (*), but I am guessing. My problem still is I do not know how to put all the ideas together from all the sections here, even though the ideas in each one by themselves make sense. At least let us note the following for now. In section 1.7.1 we said that A ⊃ B is equivalent to ¬A ∨ B. So my following explanation is wrong, but I cannot think of anything else at the moment. Maybe we are to think of how A, ¬A ∨ B ⊢ B works by disjunctive syllogism and A, A ⊃ B ⊢ B by modus ponens. Both cases also fulfill (*), which confirms “if A then B”. I am incorrectly guessing that the overall idea is that (*) gives us our definition for the English conditional ‘if A then B’, and the inference which gives us the basic “logic” of the conditional (modus ponens, or disjunctive syllogism formulated equivalently) fulfills the definition in (*). I am very sorry that at the moment I cannot put all these sections together coherently, so please read this whole section 1.10 for yourself.]

Now, suppose that ¬A ∨ B is true. Then from this and A we can deduce B, by the disjunctive syllogism: A, ¬A ∨ B ⊢ B. Hence, by (*), ‘If A then B’ is true.

(16)
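The disjunctive syllogism itself is easy to verify over the four classical valuations: no assignment makes both A and ¬A ∨ B true while making B false. A sketch of my own:

```python
from itertools import product

# Disjunctive syllogism: from A and ¬A ∨ B, deduce B.
# Collect any valuation where both premises hold but B fails.
countermodels = [(A, B)
                 for A, B in product([True, False], repeat=2)
                 if A and ((not A) or B) and not B]
assert countermodels == []   # no counter-model: A, ¬A ∨ B ⊢ B is valid
```

With ¬A ∨ B playing the role of the true statement C, this is exactly the inference (*) asks for.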

[contents]

 

 

 

1.10.6

[Noting Disjunctive Syllogism in the Argumentation]

 

[What later proves important in the above argumentation is the use of disjunctive syllogism.]

 

[Priest notes finally that:]

We will come back to this argument in a later chapter. For now, just note the fact that it uses the disjunctive syllogism.

(16)

[contents]

 

 

 

 

 

 

From:

 

Priest, Graham. 2008 [2001]. An Introduction to Non-Classical Logic: From If to Is, 2nd edn. Cambridge: Cambridge University Press.

 

 


22 May 2018

Priest (4.8) An Introduction to Non-Classical Logic, ‘The Explosion of Contradictions,’ summary

 

by Corry Shores

 


 

 

 

 

Summary of

 

Graham Priest

 

An Introduction to Non-Classical Logic: From If to Is

 

Part I:

Propositional Logic

 

4.

Non-Normal Modal Logics; Strict Conditionals

 

4.8

The Explosion of Contradictions

 

 

 

 

Brief summary:

(4.8.1) One of the paradoxes of the strict conditional is: ⊨ (A ∧ ¬A) ⥽ B. By modus ponens we derive: (A ∧ ¬A) ⊨ B. In other words, contradictions entail everything (any arbitrary formula whatsoever). But this is counter-intuitive, and there are counter-examples that we will consider. (4.8.2) The first counter-example: Bohr knowingly combined inconsistent assumptions in his model of the atom, and yet the model functioned well. However, explosion does not hold here, because we cannot on the basis of the contradiction infer everything else, like electronic orbits being rectangles. (4.8.3) The second counter-example: we can have inconsistent laws without their contradiction entailing everything. (4.8.4) The third counter-example: there are perceptual illusions that give us inconsistent impressions without giving us all impressions. For example, the waterfall illusion gives us the impression of something moving and not moving, but it does not thereby also give us every other impression whatsoever. The fourth counter-example: there can be fictional situations where contradictions hold but where not everything thereby holds as well.

 

 

 

 

 

 

 

Contents

 

4.8.1

[The Strict Conditional Involves the Explosion of Contradictions]

 

4.8.2

[Counter-Example 1: The Bohr Model’s Contradictory Assumptions as Non-Explosive]

 

4.8.3

[Counter-Example 2: Inconsistent Legislation]

 

4.8.4

[Counter-Example 3: Perceptual Illusions. Counter-Example 4: Fictional Situations]

 

 

 

 

 

 

 

 

Summary

 

4.8.1

[The Strict Conditional Involves the Explosion of Contradictions]

 

[One of the paradoxes of the strict conditional is: ⊨ (A ∧ ¬A) ⥽ B. By modus ponens we derive: (A∧¬A)⊨B. In other words, contradictions entail everything (any arbitrary formula whatsoever). But this is counter-intuitive, and there are counter-examples that we will consider.]

 

[Let us first recall some notions regarding the strict conditional. In section 4.5.2 and section 4.5.3 we learned that the strict conditional is defined as “□(A ⊃ B),” and it is symbolized as A ⥽ B. In section 4.6.2 and section 4.6.3, we learned that modal systems that can handle conditionality should be systems where modus ponens holds: A, A ⥽ B ⊨ B. (I did not know why exactly this is necessary, but I guessed it was for the following reason. Suppose modus ponens does not hold. That would mean that by affirming the antecedent, we could not obtain the consequent. But were that the case, then we would have lost a basic intuition we have about conditionality, namely, that the consequent follows necessarily from the antecedent.) We learned in section 4.6.2 that for modus ponens to hold in a modal system, it needs the ρ-constraint (reflexivity). (Recall it from section 3.2.3: “ρ (rho), reflexivity: for all w, wRw” p. 36.) But we then learned in section 4.6.3 that no matter how many other constraints we add to ρ, we will always obtain the paradoxes of strict implication, one being: ⊨ (A ∧ ¬A) ⥽ B. Now in our current section, Priest says that by modus ponens, from ⊨ (A ∧ ¬A) ⥽ B we can derive (A ∧ ¬A) ⊨ B. (I do not know exactly how that works, however. I guess the idea is that if we establish the conditional, and if we have modus ponens, then simply from the antecedent being affirmed we can infer the consequent as a semantic consequence. The important philosophical point here is that) the strict conditional in any modal system that can handle conditionality leads us to being able to derive any arbitrary formula whatsoever from a contradiction. As Priest puts it: “Contradictions would entail everything.” But this is counter-intuitive. Priest will now give three counter-examples of situations or theories that are inconsistent but where we should not thereby be able to infer that everything whatsoever holds.]

The toughest objections to a strict conditional, at least as an account of the indicative conditional, come from the fact that ⊨(A∧¬A)⥽B. If this were the case, then, by modus ponens, we would have (A∧¬A)⊨B. Contradictions would entail everything. Not only is this highly counterintuitive, | there would seem to be definite counter-examples to it. There appear to be a number of situations or theories which are inconsistent, yet in which it is manifestly incorrect to infer that everything holds. Here are three very different examples.

(74-75)
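In the two-valued semantics itself, explosion holds only vacuously: no classical valuation makes A ∧ ¬A true, so the entailment (A ∧ ¬A) ⊨ B can never be falsified. A quick check of my own:

```python
from itertools import product

# Search for a counter-model to (A ∧ ¬A) ⊨ B: a valuation where the
# premise A ∧ ¬A is true but B is false. Classically there is none,
# because A ∧ ¬A is true at no valuation at all.
countermodels = [(A, B)
                 for A, B in product([True, False], repeat=2)
                 if (A and not A) and not B]
assert countermodels == []
```

It is exactly this vacuity that the counter-examples below put pressure on: they describe situations treated as genuinely inconsistent, where the contradictory premise is not idle.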

[contents]

 

 

 

 

 

4.8.2

[Counter-Example 1: The Bohr Model’s Contradictory Assumptions as Non-Explosive]

 

[The first counter-example: Bohr knowingly combined inconsistent assumptions in his model of the atom, and yet the model functioned well. However, explosion does not hold here, because we cannot on the basis of the contradiction infer everything else, like electronic orbits being rectangles.]

 

[I do not know much about the first example, so please see the quotation below. The basic idea is that Bohr knowingly combined two inconsistent assumptions in his model of the atom, namely, he assumes “the standard Maxwell electromagnetic equations” but also “that energy could come only in discrete packets (quanta).” Yet, despite its obvious inconsistency, both assumptions were needed for the model to work and “many of its observable predictions were spectacularly verified.” Priest’s philosophical point here is that on the basis of this contradiction, we cannot infer everything else. “Bohr did not infer, for example, that electronic orbits are rectangles” (75).]

The first is a theory in the history of science: Bohr’s theory of the atom (the ‘solar system’ model). This was internally inconsistent. To determine the behaviour of the atom, Bohr assumed the standard Maxwell electromagnetic equations. But he also assumed that energy could come only in discrete packets (quanta). These two things are inconsistent (as Bohr knew); yet both were integrally required for the account to work. The account was therefore essentially inconsistent. Yet many of its observable predictions were spectacularly verified. It is clear though that not everything was taken to follow from the account. Bohr did not infer, for example, that electronic orbits are rectangles.

(75)

[contents]

 

 

 

 

4.8.3

[Counter-Example 2: Inconsistent Legislation]

 

[The second counter-example: we can have inconsistent laws without their contradiction entailing everything.]

 

[In Priest’s second counter-example, we have two laws that function together non-problematically in most cases, but in a particular situation they come into contradiction. Priest then says that on the basis of this contradiction, “it would be stupid to infer from this that, for example, the traffic laws are consistent” (75). (I did not quite get how that works. Are we saying that we can consider our two inconsistent laws as presenting a structure like A ∧ ¬A, and “the traffic laws are consistent” is some arbitrary B that we try to derive from it? At any rate, surely at least we might say that from this contradiction we cannot derive any other traffic law we want.)]

Another example: pieces of legislation are often inconsistent. To avoid irrelevant historical details, here is an hypothetical example. Suppose that an (absent-minded) state legislator passes the following traffic laws. At an unmarked junction, the priority regulations are:

(1) Any woman has priority over any man.

(2) Any older person has priority over any younger person.

(We may suppose that clause 2 was meant to resolve the case where two men or two women arrive together, but the legislator forgot to make it subordinate to clause 1.) The legislation will work perfectly happily in three out of four combinations of sex and age. But suppose that Ms X, of age 30, approaches the junction at the same time as Mr Y, of age 40. Ms X has priority (by 1), but has not got priority (by 2 and the meaning of ‘priority’). Hence, the situation is inconsistent. But, again, it would be stupid to infer from this that, for example, the traffic laws are consistent.

(75)

[contents]

 

 

 

 

4.8.4

[Counter-Example 3: Perceptual Illusions. Counter-Example 4: Fictional Situations]

 

[The third counter-example: there are perceptual illusions that give us inconsistent impressions without giving us all impressions. For example, the waterfall illusion gives us the impression of something moving and not moving, but it does not thereby also give us every other impression whatsoever. The fourth counter-example: there can be fictional situations where contradictions hold but that thereby not all things hold as well.]

 

[The third example is that there are perceptual illusions that can give us inconsistent impressions. For example, the waterfall illusion causes us to see something both in motion and not in motion. But thereby we do not perceive everything else, like for example that everything is red all over. The fourth example is that in fictional situations where there are contradictions, it does not follow that everything holds in that fictional situation. (For some reason the fourth one is placed in a footnote, despite being an excellent and convincing counter-example.)]

Third example: it is possible to have visual illusions where things appear contradictory. For example, in the ‘waterfall effect’, one’s visual system is conditioned by constant motion of a certain kind, say a rotating spiral. If one then looks at a stationary situation, say a white wall, it appears to move in the opposite direction. But, a point in the visual field, | say at the top, does not appear to move, for example, to revolve around to the bottom. Thus, things appear to move without changing place: the perceived situation is inconsistent. But not everything perceivable holds in this situation. For example, it is not the case that the situation is red all over.5

(75-76)

5. A fourth kind of example is provided by certain fictional situations, in which contradictory states of affairs hold. This may well be the case without everything holding in the fictional situation.

(76)

[contents]

 

 

 

 

 

 

 

From:

 

Priest, Graham. 2008 [2001]. An Introduction to Non-Classical Logic: From If to Is, 2nd edn. Cambridge: Cambridge University Press.

 

 


 

17 May 2018

Priest (3.6a) An Introduction to Non-Classical Logic, ‘The Tense Logic Kt’, summary

 

by Corry Shores

 


 

 

 

 

Summary of

 

Graham Priest

 

An Introduction to Non-Classical Logic: From If to Is

 

Part I:

Propositional Logic

 

3.

Normal Modal Logics

 

3.6a

The Tense Logic Kt

 

 

 

 

Brief summary:

(3.6a.1) We will now examine tense logic. (3.6a.2) The semantics of tense logic are the same as modal logic, only with some modifications to reflect certain temporal senses. The notion of succession is modeled with the accessibility relation such that w1Rw2  has the intuitive sense: ‘w1 is earlier than w2’. “□A means something like ‘at all later times, A’, and ◊A as ‘at some later time, A’,” but “we will now write □ and ◊ as [F] and ⟨F⟩, respectively. (The F is for ‘future’)” (49). (3.6a.3) The tense logic operators for the past are [P] and ⟨P⟩, which correspond semantically to □ and ◊. (3.6a.4) We evaluate the tense operators in the following way:

vw([P]A) = 1 iff for all w′ such that w′Rw, vw′(A) = 1

vw(⟨P⟩A) = 1 iff for some w′ such that w′Rw, vw′(A) = 1

vw([F]A) = 1 iff for all w′ such that wRw′, vw′(A) = 1

vw(⟨F⟩A) = 1 iff for some w′ such that wRw′, vw′(A) = 1

(50, with the future operator formulations being my guesses.)

(3.6a.5) “If, in an interpretation, R may be any relation, we have the tense-logic analogue of the modal logic, K, usually written as Kt” (50). (3.6a.6) The tableau rules for the tense operators are much like those for necessity and possibility, only we need to keep in mind the order of the r formulations for the different tenses. Priest provides the following tableau rules for the tense operators.

 

Full Future

Development ([F]D)

[F]A,i

irj

A,j

(For all j)

 

Partial Future

Development (⟨F⟩D)

⟨F⟩A,i

irj

A,j

 

(j must be new: it cannot occur anywhere above on the branch)

 

Negated Full Future

Development (¬[F]D)

¬[F]A,i

⟨F⟩¬A,i

 

Negated Partial Future

Development (¬⟨F⟩D)

¬⟨F⟩A,i

⟨F⟩¬A,i

 

Full Past

Development ([P]D)

[P]A,i

jri

A,j

 

(For all j)

 

Partial Past

Development (⟨P⟩D)

⟨P⟩A,i

jri

A,j

 

(j must be new: it cannot occur anywhere above on the branch)

 

Negated Full Past

Development (¬[P]D)

¬[P]A,i

⟨P⟩¬A,i

 

Negated Partial Past Development (¬⟨P⟩D)

¬⟨P⟩A,i

⟨P⟩¬A,i

(50, with my added names and other data at the bottoms)

(3.6a.7) Priest then gives a tableau example. (3.6a.8) Priest then shows how to construct a counter-model in tense logic, using an example. (We use the same procedure given in section 2.4.7.) (3.6a.9) We can think of time going in reverse, from the future, moving backward through the past, by taking the converse R relation (yRx becomes xŘy) (and/or by converting all F’s to P’s and vice versa).

 

 

 

 

 

Contents

 

3.6a.1

[Turning to Tense Logic]

 

3.6a.2

[Tense Logic Notation. Future Operators: [F] and ⟨F⟩.]

 

3.6a.3

[Past Operators: [P] and ⟨P⟩]

 

3.6a.4

[Tense Operator Semantic Evaluation]

 

3.6a.5

[Tense Logic as Kt]

 

3.6a.6

[Tense Logic Tableau Rules]

 

3.6a.7

[Tableau Example]

 

3.6a.8

[Counter-Models]

 

3.6a.9

[Mirror Images and Converse Temporal Relations]

 

 

 

 

 

 

 

 

 

Summary

 

3.6a.1

[Turning to Tense Logic]

 

[We will now examine tense logic.]

 

[In previous sections, we have been learning the normal modal logics K. Priest will now look at tense logic.]

In the last two sections of this chapter, we will look at another interpretation of modal logics: tense logic.

(49)

[contents]

 

 

 

 

3.6a.2

[Tense Logic Notation. Future Operators: [F] and ⟨F⟩.]

 

[The semantics of tense logic are the same as modal logic, only with some modifications to reflect certain temporal senses. The notion of succession is modeled with the accessibility relation such that w1Rw2  has the intuitive sense: ‘w1 is earlier than w2’. “□A means something like ‘at all later times, A’, and ◊A as ‘at some later time, A’,” but “we will now write □ and ◊ as [F] and ⟨F⟩, respectively. (The F is for ‘future’)” (49). ]

 

[Tense logic uses the same semantics as for normal modal logic, only we give a different intuitive sense to the R accessibility relation for worlds. We now think of a world as being a world at some time, and the R relation as meaning that the first indicated world is at a time prior to the second one. With that in mind, the necessity operator would be like “at all later times” while the possibility operator means “at some later time”. We now write □ as [F] and ◊ as ⟨F⟩. (At this point I am a little confused. I am guessing that the notion of “at all later times” means that we are thinking that the future is a singular linear series, and we are saying that the proposition holds for all future moments. I would guess an example would be something like, at all later times, water is wet. And for, “at some later time,” maybe an example could be, at some later time it will be raining. I am a little uncertain, because I am wondering about how the future can also be seen as branching out along divergent possibilities, and thus to say “at all later times” would mean something like, regardless of which way things go, in all cases something will hold. For example, at all later times the sun is on the path to dying. But as far as I can tell, that is not the meaning, even though I still cannot say for sure yet.)]

The semantics of a tense logic are exactly the same as those for a normal modal logic. Intuitively, though, one thinks of the worlds of an interpretation as times (or maybe states of affairs at times), and the relation w1Rw2 as ‘w1 is earlier than w2’. Hence □A means something like ‘at all later times, A’, and ◊A as ‘at some later time, A’. For reasons that will become clear in a moment, we will now write □ and ◊ as [F] and ⟨F⟩, respectively. (The F is for ‘future’.)

(49)

[contents]

 

 

 

 

3.6a.3

[Past Operators: [P] and ⟨P⟩]

 

[The tense logic operators for the past are [P] and ⟨P⟩, which correspond semantically to □ and ◊.]

 

[In tense logic there are also operators for the past, [P] and ⟨P⟩, semantically corresponding to □ and ◊.]

What is novel about tense logic is that another pair of operators, [P] and ⟨P⟩, is added to the language. (The P is for ‘past’.)5 Their grammar is exactly the same as that for [F] and ⟨F⟩. So we can write things such as ⟨P⟩[F](p∧¬[P]q).

(49)

5. Traditionally, the operators ⟨F⟩, [F], ⟨P⟩ and [P], are written as F, G, P and H, respectively.

(49)

[contents]

 

 

 

 

 

3.6a.4

[Tense Operator Semantic Evaluation]

 

[We evaluate the tense operators in the following way: vw([P]A) = 1 iff for all w′ such that w′Rw, vw′(A) = 1; vw(⟨P⟩A) = 1 iff for some w′ such that w′Rw, vw′(A) = 1; vw([F]A) = 1 iff for all w′ such that wRw′, vw′(A) = 1; vw(⟨F⟩A) = 1 iff for some w′ such that wRw′, vw′(A) = 1.]

 

[Priest now gives the truth conditions for the tense operators. (Recall from section 3.6a.2 above that the R relation indicates successive order of times: “one thinks of the worlds of an interpretation as times (or maybe states of affairs at times), and the relation w1Rw2 as ‘w1 is earlier than w2’” (p.49, section 3.6a.2). So here, the first world in the wRw′ sequence comes before the second world, meaning really the first moment of a world comes before the second moment of that world. Now recall from section 3.6a.3 above that [P] means something like, “at all prior times...” and that it is equivalent to the necessity operator, except with the additional temporal qualification of holding for just a certain set of other worlds (times), namely, the ones in the past. As such, [P] would be true if for all worlds (times) in the past, the formula it modifies holds. And so ⟨P⟩, which means something like, “at some prior time...” and which is equivalent to the possibility modifier, would be true if there is at least one prior world (time) where the formula holds. The same would go for the future tense operators, which I will add to Priest’s explicit formulations for the past ones. But I may have them wrong, so please trust your own formulations over mine.

vw([P]A) = 1 iff for all w′ such that w′Rw, vw′(A) = 1

vw(⟨P⟩A) = 1 iff for some w′ such that w′Rw, vw′(A) = 1

vw([F]A) = 1 iff for all w′ such that wRw′, vw′(A) = 1

vw(⟨F⟩A) = 1 iff for some w′ such that wRw′, vw′(A) = 1

(50, with the future operator formulations being my guesses.)

]

The truth conditions for ⟨P⟩ and [P] are exactly the same as those for ⟨F⟩ and [F], except that the direction of R is reversed:

vw(⟨P⟩A) = 1 iff for some w′ such that w′Rw, vw′(A) = 1

vw([P]A) = 1 iff for all w′ such that w′Rw, vw′(A) = 1

(50)
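These truth conditions can be model-checked directly on a finite interpretation. Below is a minimal sketch of my own (the function name ev and the tuple encoding of formulas are not Priest's notation): worlds are integers, R is a set of (earlier, later) pairs, and v assigns truth values to parameters at worlds.

```python
# A minimal model checker for the Kt truth conditions quoted above.
# Worlds are integers; R is a set of (earlier, later) pairs; v maps
# (world, parameter) to True/False. Formulas are nested tuples, e.g.
# ('[P]', ('p',)) for [P]p. This encoding is my own, not Priest's.

def ev(fmla, w, R, v):
    op = fmla[0]
    if op == 'not':
        return not ev(fmla[1], w, R, v)
    if op == '[F]':   # at all later times
        return all(ev(fmla[1], t, R, v) for (x, t) in R if x == w)
    if op == '<F>':   # at some later time
        return any(ev(fmla[1], t, R, v) for (x, t) in R if x == w)
    if op == '[P]':   # at all earlier times (direction of R reversed)
        return all(ev(fmla[1], u, R, v) for (u, x) in R if x == w)
    if op == '<P>':   # at some earlier time
        return any(ev(fmla[1], u, R, v) for (u, x) in R if x == w)
    return v[(w, op)]  # propositional parameter

# Example: w0 is earlier than w1, and p holds only at w1.
R = {(0, 1)}
v = {(0, 'p'): False, (1, 'p'): True}
print(ev(('<F>', ('p',)), 0, R, v))           # <F>p at w0: True
print(ev(('<P>', ('not', ('p',))), 1, R, v))  # <P>¬p at w1: True
```

Note how the past operators differ from the future ones only in which side of the R pair is matched against the current world, exactly as in the quoted truth conditions.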

[contents]

 

 

 

 

 

3.6a.5

[Tense Logic as Kt]

 

[“If, in an interpretation, R may be any relation, we have the tense-logic analogue of the modal logic, K, usually written as Kt” (50).]

 

[I am not sure about the next point, so please consult the quotation below. I am guessing it is saying the following. We have distinguished R relations which relate a world to a past world from R relations that relate a world to a future world. But if we make it such that we consider all R relations alike, then we have a system no different from normal modal logic K. But maybe this is not what is meant. The footnote perhaps clarifies this matter, but I am not sure. The point might be the following, but I am guessing. We could distinguish two sorts of R relations, one for the past and one for the future. However, this is unnecessary, because we only would need to switch the places of the w and w′ in the formulation w′Rw to designate the other temporality. Overall I am confused, because still it would seem that we have different semantic rules than K, as we have two sets of the same rule. Let us look again at the rules in K for necessity and possibility and compare them with the tense rules:

For any world w ∈ W:

vw(◊A) = 1 if, for some w′ ∈ W such that wRw′, vw′(A) = 1; and 0 otherwise.

vw(□A) = 1 if, for all w′ ∈ W such that wRw′, vw′(A) = 1; and 0 otherwise.

(Priest p.22, section 2.3.5)

 

vw([P]A) = 1 iff for all w′ such that w′Rw, vw′(A) = 1

vw(⟨P⟩A) = 1 iff for some w′ such that w′Rw, vw′(A) = 1

vw([F]A) = 1 iff for all w′ such that wRw′, vw′(A) = 1

vw(⟨F⟩A) = 1 iff for some w′ such that wRw′, vw′(A) = 1

(50, with the future operator formulations being my guesses.)

The necessity/possibility rules of K have the same structure as the future operator. But the ones for the past reverse the relation, and so I still am not grasping how it is an analogue of K. Given what is said later, I am going to make the following guess, which is likely wrong. Tense logic is called Kt, because it is structurally the same as K except it is endowed with an intuitive temporal sense by making specific semantic rules corresponding to the tense operators, which otherwise function exactly the same as necessity and possibility. Or maybe the idea is that an analogue need not be identical but simply be structured analogously.]

If, in an interpretation, R may be any relation, we have the tense-logic analogue of the modal logic, K, usually written as Kt.6

(50)

6. Generally speaking, modal logics with more than one pair of modal operators are called ‘multimodal logics’, and in an interpretation for such a logic there is an accessibility relation, RX, for each pair of operators, ⟨X⟩ and [X]. In tense logic, however, it is unnecessary to give an independent specification of RP, since this is just the converse of RF. That is, w1RPw2 iff w2RFw1.

(50)
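Footnote 6 is easy to render concretely: RP never needs an independent specification because it is just the converse of RF, obtained by reversing each pair. A tiny sketch (the variable names are my own):

```python
# Footnote 6 in code: R_P is just the converse of R_F, so it needs no
# independent specification. Worlds here are 0, 1, 2 with 0 before 1
# before 2; variable names are my own.

R_F = {(0, 1), (1, 2)}                 # w0 before w1, w1 before w2
R_P = {(y, x) for (x, y) in R_F}       # converse: reverse each pair

# w1 R_P w2 iff w2 R_F w1, for every pair of worlds:
assert all(((a, b) in R_P) == ((b, a) in R_F)
           for a in range(3) for b in range(3))
print(sorted(R_P))                     # [(1, 0), (2, 1)]
```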

[contents]

 

 

 

 

3.6a.6

[Tense Logic Tableau Rules]

 

[The tableau rules for the tense operators are much like those for necessity and possibility, only we need to keep in mind the order of the r formulations for the different tenses.]

 

[Priest now provides the following tableau rules for the tense operators. (See quotation below).]

Appropriate tableaux for Kt are easy. The rules for ⟨F⟩ and [F] are exactly the same as those for ◊ and □, and those for ⟨P⟩ and [P] are the same with the order of r reversed appropriately. Thus, we have:

 

Full Future

Development ([F]D)

[F]A,i

irj

A,j

 

(For all j)

 

Partial Future

Development (⟨F⟩D)

⟨F⟩A,i

irj

A,j

 

(j must be new: it cannot occur anywhere above on the branch)

 

Negated Full Future

Development (¬[F]D)

¬[F]A,i

⟨F⟩¬A,i

 

Negated Partial Future

Development (¬⟨F⟩D)

¬⟨F⟩A,i

⟨F⟩¬A,i

 

Full Past

Development ([P]D)

[P]A,i

jri

A,j

 

(For all j)

 

Partial Past

Development (⟨P⟩D)

⟨P⟩A,i

jri

A,j

 

(j must be new: it cannot occur anywhere above on the branch)

 

Negated Full Past

Development (¬[P]D)

¬[P]A,i

⟨P⟩¬A,i

 

Negated Partial Past Development (¬⟨P⟩D)

¬⟨P⟩A,i

⟨P⟩¬A,i

(50, with my added names and other data at the bottoms)

In the first rule of each four, this is for all j; in the second, j is new.

(50)
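The diamond rules ⟨F⟩D and ⟨P⟩D differ only in the order of i and j in the r-line they introduce. A rough sketch of that difference (the encoding is my own, not from the text):

```python
# Sketch of applying <F>D or <P>D at a node <X>A,i: both introduce a
# new world j and the node A,j; they differ only in the order of the
# r-line. The encoding is my own, not Priest's.

def expand_diamond(op, A, i, used):
    j = max(used) + 1                   # j must be new on the branch
    r_line = (i, j) if op == '<F>' else (j, i)
    return r_line, (A, j)

# From <P>¬<F>A,0 (as in the tableau of 3.6a.7) we get 1r0 and ¬<F>A,1:
r_line, node = expand_diamond('<P>', '¬<F>A', 0, used={0})
print(r_line, node)
```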

[contents]

 

 

 

 

3.6a.7

[Tableau Example]

 

[Priest then gives a tableau example.]

 

[Priest now gives an example for a tableau. (See section 2.4 for how to construct modal tableaux.) We see from the example the way that the two tense operators interact.]

The main novelty in Kt is in the interaction between the future and past tense operators. Thus, for example, A ⊢ [P]⟨F⟩A:

A ⊢ [P]⟨F⟩A

1. A,0 (P)

2. ¬[P]⟨F⟩A,0 (P)

3. ⟨P⟩¬⟨F⟩A,0 (2¬[P])

4a. 1r0 (3⟨P⟩)

4b. ¬⟨F⟩A,1 (3⟨P⟩)

5. [F]¬A,1 (4b¬⟨F⟩)

6. ¬A,0 (4a,5[F])

× (1×6)

Valid

(50, enumerations and step accounting are my own and are not to be trusted)

We have the last line, since 1r0.

(50)
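As a sanity check of my own (not Priest's method), one can brute-force all interpretations with up to three worlds and one parameter p, looking for a counter-model to p ⊨ [P]⟨F⟩p. None exists among them, which is consistent with the tableau's verdict, though a finite search of course proves nothing by itself:

```python
# Brute-force search (my own sketch) for a counter-model to
# p entails [P]<F>p, over all frames with 1-3 worlds and all
# valuations of a single parameter p.
from itertools import product

def holds_PFp(w, R, v):
    # [P]<F>p at w: every earlier world u sees some later world t with p.
    return all(any(v[t] for (x, t) in R if x == u)
               for (u, x) in R if x == w)

found_counter_model = False
for n in (1, 2, 3):
    worlds = list(range(n))
    pairs = [(a, b) for a in worlds for b in worlds]
    for bits in product([False, True], repeat=len(pairs)):
        R = {p for p, b in zip(pairs, bits) if b}
        for vals in product([False, True], repeat=n):
            v = dict(zip(worlds, vals))
            if any(v[w] and not holds_PFp(w, R, v) for w in worlds):
                found_counter_model = True
print(found_counter_model)   # False: no counter-model found
```

Intuitively the search must come up empty: if p holds at w and u is earlier than w, then w itself is a later world of u where p holds, so ⟨F⟩p holds at every such u, hence [P]⟨F⟩p holds at w.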

[contents]

 

 

 

 

3.6a.8

[Counter-Models]

 

[Priest then shows how to construct a counter-model in tense logic, using an example. (We use the same procedure given in section 2.4.7.)]

 

[Recall from section 2.4.7 how we construct counter-models in K:

Counter-models can be read off from an open branch of a tableau in a natural way. For each number, i, that occurs on the branch, there is a world, wi; wiRwj iff irj occurs on the branch; for every propositional parameter, p, if p, i occurs on the branch, vwi(p) = 1, if ¬p, i occurs on the branch, vwi(p) = 0 (and if neither, vwi(p) can be anything one wishes).

(p.27, section 2.4.7)

Priest says that we formulate counter models in Kt in the same way.]

Counter-models are read off from tableaux just as they are for K. For example, ⊬ p ⊃ ([F]p ∨ [P]p). The tableau for this is:

p ⊃ ([F]p ∨ [P]p)

1. ¬(p ⊃ ([F]p ∨ [P]p)),0 (P)

2. p,0 (1¬⊃)

3. ¬([F]p ∨ [P]p),0 (1¬⊃)

4. ¬[F]p,0 (3¬∨)

5. ¬[P]p,0 (3¬∨)

6. ⟨F⟩¬p,0 (4¬[F])

7. ⟨P⟩¬p,0 (5¬[P])

8a. 0r1 (6⟨F⟩)

8b. ¬p,1 (6⟨F⟩)

9a. 2r0 (7⟨P⟩)

9b. ¬p,2 (7⟨P⟩)

(open)

(51, enumerations and step accounting are my own and are not to be trusted)

This gives the counter-model which may be depicted as follows:

w2 (¬p) → w0 (p) → w1 (¬p)

(51)
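The read-off counter-model can be checked directly. A sketch of my own: with w2 earlier than w0, w0 earlier than w1, and p true only at w0, the conditional p ⊃ ([F]p ∨ [P]p) fails at w0.

```python
# Verifying the counter-model read off from the open branch above:
# R says w2 is earlier than w0 and w0 is earlier than w1; p holds
# only at w0.
R = {('w2', 'w0'), ('w0', 'w1')}
v = {'w0': True, 'w1': False, 'w2': False}

Fp_at_w0 = all(v[t] for (x, t) in R if x == 'w0')  # [F]p at w0: p at w1?
Pp_at_w0 = all(v[u] for (u, x) in R if x == 'w0')  # [P]p at w0: p at w2?
conditional = (not v['w0']) or Fp_at_w0 or Pp_at_w0  # p ⊃ ([F]p ∨ [P]p)
print(Fp_at_w0, Pp_at_w0, conditional)   # False False False
```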

[contents]

 

 

 

 

3.6a.9

[Mirror Images and Converse Temporal Relations]

 

[We can think of time going in reverse, from the future, moving backward through the past, by taking the converse R relation (yRx becomes xŘy) (and/or by converting all F’s to P’s and vice versa).]

 

[Priest next explains the notion of mirror image, where you replace all F’s with P’s and vice versa. I am not sure if the next idea is a result, another different thing, or an addition to that, but we can also say that the converse of any R relation is the reverse order of its terms, noted with Ř, such that yRx becomes xŘy. I am not sure what the intuitive notion is here, but I would guess that we might consider some timeline going from past to future and instead think of it as going from future to past, as if time reversed itself. I am guessing very wildly. At any rate, Priest’s next point is that in any such converse interpretation, whatever was valid or invalid in it will also be valid or invalid in the converse interpretation. Please read the quote, as I did not follow this one very well.]

If A is any formula, call the formula obtained by writing all ‘P’s as ‘F’s, and vice versa, its mirror image. Thus, the mirror image of [F]p ⊃ ¬⟨P⟩q is [P]p ⊃ ¬⟨F⟩q. Given any binary relation, R, let its converse, Ř, be the relation obtained by simply reversing the order of its arguments. Thus, xŘy iff yRx. It is clear that if we have any interpretation for Kt, the interpretation that is exactly the same, except that R is replaced by Ř, is just as good an interpretation. Moreover, in this interpretation, ⟨F⟩ and [F] behave in exactly the same way as ⟨P⟩ and [P] do in the original interpretation, and vice versa. Hence any inference is valid/invalid in Kt just if the inference obtained by replacing every formula by its mirror image is valid/invalid. So, for example, by 3.6a.7, A ⊢ [F]⟨P⟩A.

(51)
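The mirror-image fact can be checked on small models: evaluating the mirror image of a formula over the converse relation Ř gives the same value, at every world, as the original formula over R. A sketch of my own (the encoding and names are assumptions, not Priest's notation):

```python
# Mirror images and converse relations: ev evaluates a formula at a
# world; mirror swaps every F for a P and vice versa. The tuple
# encoding is my own.

def ev(f, w, R, v):
    op = f[0]
    if op == 'not':  return not ev(f[1], w, R, v)
    if op == '[F]':  return all(ev(f[1], t, R, v) for (x, t) in R if x == w)
    if op == '<F>':  return any(ev(f[1], t, R, v) for (x, t) in R if x == w)
    if op == '[P]':  return all(ev(f[1], u, R, v) for (u, x) in R if x == w)
    if op == '<P>':  return any(ev(f[1], u, R, v) for (u, x) in R if x == w)
    return v[(w, op)]   # propositional parameter

SWAP = {'[F]': '[P]', '[P]': '[F]', '<F>': '<P>', '<P>': '<F>'}

def mirror(f):
    if f[0] in SWAP:  return (SWAP[f[0]], mirror(f[1]))
    if f[0] == 'not': return ('not', mirror(f[1]))
    return f

# Check on a small model: the mirror image over the converse Ř agrees
# with the original formula over R at every world.
R = {(0, 1), (1, 2)}
Rc = {(y, x) for (x, y) in R}                # the converse Ř
v = {(0, 'p'): True, (1, 'p'): False, (2, 'p'): True}
f = ('[P]', ('<F>', ('p',)))                 # [P]<F>p
ok = all(ev(f, w, R, v) == ev(mirror(f), w, Rc, v) for w in range(3))
print(ok)   # True
```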

[contents]

 

 

 

 

 

From:

 

Priest, Graham. 2008 [2001]. An Introduction to Non-Classical Logic: From If to Is, 2nd edn. Cambridge: Cambridge University Press.

 

 

 

 

 

.