28 Jul 2016

Nolt (11.4) Logics, ‘Inference in Leibnizian Logic,’ summary

 

by Corry Shores

[Search Blog Here. Index-tags are found on the bottom of the left column.]

 

[Central Entry Directory]

[Logic & Semantics, Entry Directory]

[John Nolt, entry directory]

[Nolt, Logics, entry directory]

 

[The following is a summary. All boldface in quotations is in the original unless otherwise noted. Bracketed commentary is my own.]

 

 

 

Summary of

 

John Nolt

 

Logics

 

Part 4: Extensions of Classical Logic

 

Chapter 11: Leibnizian Modal Logic

 

11.4 Inference in Leibnizian Logic

 

 

 

 

Brief summary:

Making proofs using Leibnizian modal logic will involve all the rules from propositional logic along with the identity rules and the following seven additional rules.

DUAL (Duality): From either ◊Φ and ~□~Φ, infer the other; from either □Φ and ~◊~Φ, infer the other.
K (K rule): From □(Φ → Ψ), infer (□Φ → □Ψ).
T (T rule): From □Φ, infer Φ.
S4 (S4 rule): From □Φ, infer □□Φ.
B (Brouwer rule): From Φ, infer □◊Φ.
N (Necessitation): If Φ has previously been proven as a theorem, then any formula of the form □Φ may be introduced at any line of a proof.
□= (Necessity of identity): From α = β, infer □α = β.

(Nolt 328)

 

We do not use quantification in these proofs, because we want to avoid the problem of predication for non-existing objects.

 

 

Summary

 

Nolt will explain how proofs are made in Leibnizian modal propositional logic. We keep all the inference rules from classical propositional logic. But we add some new rules that will allow us to process modal operators. In this section we will not work with quantifiers, although we will look at inferences involving identity. The reason we will avoid quantifiers is that “the quantifier rules depend on how we resolve the question of predication for nonexisting objects” (Nolt 328). Later we will examine one system for doing this, namely free logic. It is called “free” because it is “free of the presupposition that every name always names some existing thing” (328).

 

So we keep the rules for classical propositional logic and also the classical rules for identity. [We have not yet summarized Nolt’s exposition of the rules for propositional logic. As far as I can tell right now, they seem for the most part the same as the ones that Agler gives in his Symbolic Logic book. The rules and strategies from Agler’s and Nolt’s books are compiled here.] We then add to them seven new inference rules.

 

DUAL (Duality): From either ◊Φ and ~□~Φ, infer the other; from either □Φ and ~◊~Φ, infer the other.
K (K rule): From □(Φ → Ψ), infer (□Φ → □Ψ).
T (T rule): From □Φ, infer Φ.
S4 (S4 rule): From □Φ, infer □□Φ.
B (Brouwer rule): From Φ, infer □◊Φ.
N (Necessitation): If Φ has previously been proven as a theorem, then any formula of the form □Φ may be introduced at any line of a proof.
□= (Necessity of identity): From α = β, infer □α = β.

(Nolt 328)

 

Nolt reminds us that we proved a number of these in section 11.2.1 and section 11.2.2, and the others can easily be shown to be valid as well (Nolt 328).

 

[Nolt’s next point seems to be that when a theorem is a valid formula, then it is true in all worlds on all valuations. He defined theorem previously (in section 4.4) in the following way: “Conclusions provable without assumptions are called theorems,” and he noted that “all the theorems of propositional logic are tautologies, and vice versa” (103). So if something is a tautology, it would seem obvious that it would be true in all worlds and on all valuations, as it cannot possibly be untrue no matter what truth values we assign the terms. This then explains the rule of necessitation. So when we use the rule of necessitation, we do not refer to other premises within the same proof but rather to theorems established in a previous proof. When we notate it in the justification, we just put the prior proved theorem without mentioning line numbers. Nolt, however, it seems will often footnote the page where the theorem was previously proven.]

The necessitation rule differs from the others in that it uses no premises but refers, rather, to theorems established by previous proofs. A theorem is a valid formula, a formula true in all worlds on all valuations. Therefore, if Φ is a theorem, □Φ, and indeed any formula of the form □Φ, may be asserted anywhere in a proof without further assumptions. When we use the rule of necessitation, we annotate it by writing its abbreviation ‘N’ to the right of the introduced formula, followed by the previously proved theorem or axiom schema employed.

(328)

 

Nolt then notes that “These seven inference rules, together with the rules of classical propositional logic and the identity rules =I and =E, constitute a system of inference that is sound and complete with respect to a Leibnizian semantics for the modal propositional logic with identity” (328). [We discussed =I and =E in section 8.5.] The rules other than =I, =E, and □= (that is, the purely propositional rules) make up a logic called S5. Nolt then says that in this section he will for the most part explore “the valid inferential patterns of S5” (329).
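
[Nolt does not offer a mechanical test here, but since he says the system is sound and complete with respect to a Leibnizian semantics, each propositional sequent in this section can in principle also be checked semantically: the sequent is valid just in case, at every world of every model, the conclusion is true whenever all the premises are. The following Python sketch is my own illustration and is not from Nolt. It brute-forces that check by treating a Leibnizian (S5) model as nothing more than a non-empty set of worlds, each world being the set of sentence letters true at it, since every world is accessible from every world. I am assuming, as seems safe for S5, that it suffices to let the worlds be distinct valuations of the sequent’s sentence letters, because duplicate worlds make no difference when accessibility is universal. All of the names here (holds, valid_sequent, the tuple encoding of formulas) are my own choices.]

from itertools import product, combinations

# Formulas are nested tuples: ('atom', 'P'), ('not', f), ('and', f, g),
# ('or', f, g), ('imp', f, g), ('box', f), ('dia', f).
# A world is the frozenset of sentence letters true at it; a Leibnizian
# (S5) model is just a non-empty collection of such worlds, since every
# world "sees" every world.

def holds(f, w, model):
    op = f[0]
    if op == 'atom':
        return f[1] in w
    if op == 'not':
        return not holds(f[1], w, model)
    if op == 'and':
        return holds(f[1], w, model) and holds(f[2], w, model)
    if op == 'or':
        return holds(f[1], w, model) or holds(f[2], w, model)
    if op == 'imp':
        return (not holds(f[1], w, model)) or holds(f[2], w, model)
    if op == 'box':
        return all(holds(f[1], v, model) for v in model)   # true at every world
    if op == 'dia':
        return any(holds(f[1], v, model) for v in model)   # true at some world
    raise ValueError('unknown connective: %r' % (op,))

def valid_sequent(premises, conclusion, atoms):
    """True iff at every world of every model built from valuations of
    `atoms`, the conclusion holds whenever all the premises hold."""
    valuations = [frozenset(a for a, bit in zip(atoms, bits) if bit)
                  for bits in product([0, 1], repeat=len(atoms))]
    for size in range(1, len(valuations) + 1):
        for model in combinations(valuations, size):
            for w in model:
                if (all(holds(p, w, model) for p in premises)
                        and not holds(conclusion, w, model)):
                    return False            # counterexample world found
    return True

if __name__ == '__main__':
    P = ('atom', 'P')
    print(valid_sequent([('box', P)], P, ['P']))                  # T rule instance
    print(valid_sequent([('dia', P)],
                        ('not', ('box', ('not', P))), ['P']))     # DUAL instance
    print(valid_sequent([('dia', P)], ('box', P), ['P']))         # invalid, for contrast

[The three checks at the end correspond to an instance of the T rule, an instance of DUAL, and an invalid sequent for contrast; they should print True, True, and False.]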

 

Nolt begins by proving the sequent:

P ⊢ ⋄P

(Nolt 329)

[As we can see, the conclusion ⋄P is a simple formula, and by Duality it is interchangeable with the negated formula ~□~P. In Nolt’s strategic rules the relevant rule is stated for negated conclusions, so we can treat it in just that way. He has:

If the conclusion or subconclusion you are trying to prove is of the form ~Φ, then try this strategy: hypothesize Φ and work toward a subconclusion of the form Ψ & ~Ψ in order to obtain ~Φ by ~I.

 

Agler formulates this rule as: “SA#1(P,¬Q): If the conclusion is an atomic proposition (or a negated proposition), assume the negation of the proposition (or the non-negated form of the negated proposition), derive a contradiction, and then use ‘¬I’ or ‘¬E.’” So the first thing we will do after setting up the proof is to hypothesize the negated form of the conclusion. But recall from section 11.1 and from the Duality rule above that ◊Φ ↔ ~□~Φ. So Nolt will formulate the hypothesis as □~P, that being equivalent to the negation of the conclusion ⋄P.

 

1. P A
2. |    □~P  H (for ~I)

 

So we will want to use negation introduction. Recall the rule is: “Given a hypothetical derivation of any formula of the form (Ψ & ~Ψ) from Φ, end the derivation and infer ~Φ.” So we will want a formula of the form P & ~P. We have P already in the main proof. Now we want to derive ~P. We have our new rule T, “From □Φ, infer Φ.” So we will use that, which will allow us to make the contradictory conjunction P & ~P.

 

1. P A
2. |    □~P  H (for ~I)
3. |    ~P 2 T
4. |    P & ~P 1,3 &I

 

Now we can use negation introduction to bring out ~□~P into the main proof, and we can then apply Duality to it to get our goal proposition, the original conclusion.]

 

1. P A
2. |    □~P  H (for ~I)
3. |    ~P 2 T
4. |    P & ~P 1,3 &I
5. ~□~P 2-4 ~I
6. ⋄P 5 DUAL

(Nolt 329)

 

Nolt explains the reasoning behind this proof in the following passage:

The strategy is an indirect proof. Recognizing initially that ‘⋄P’ is interchangeable with ‘~□~P’, we hypothesize ‘□~P’ for reductio. Using the T rule, the contradiction is obtained almost immediately. This yields ‘~□~P’, which is converted into ‘⋄P’ by DUAL at line 6.

(329)
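
[As a side illustration of my own, the same sequent can be spot-checked semantically with the little checker sketched earlier, supposing it is saved as a module, which I will call s5check.py (a hypothetical filename): if P is true at the world of evaluation, then P is true at some world, so ⋄P holds.]

from s5check import valid_sequent   # hypothetical module holding the earlier sketch

P = ('atom', 'P')
print(valid_sequent([P], ('dia', P), ['P']))   # P ⊢ ⋄P, expected: True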

 

Nolt then turns to rules N and K. He says they are often used in conjunction with one another in order “to obtain modalized versions of various theorems and rules.” He shows this with the following example.

□(P & Q) ⊢ □P

(Nolt 329)

[This proof will use rule N (necessitation), and it will take a theorem that was proven in a prior chapter. So it is hard to figure this one out on our own. He will import that theorem, which is a conditional, and he will place the necessity operator in front of the whole theorem, using the necessitation rule. Then each part of the conditional will receive the necessity operator, by means of rule K, which opens the door to us using conditional elimination to derive the goal proposition.]

 

1. □(P & Q) A
2. □((P & Q) → P) N ((P & Q) → P)
3. □(P & Q) → □P 2 K
4. □P 1, 3 →E

(Nolt 329)

 

Nolt will now show a more sophisticated case using rules N and K. [We note that rules N and K deal with the necessity operator. Nolt will show how they can be used to derive a formula with a possibility operator. It will involve using the duality rule to render the possibility operator into a form with the necessity operator, which will allow us to operate on the necessity-operated formulas obtained from N and K, then also to convert them back to possibility.]

⋄P ⊢ ⋄(P ∨ Q)

(Nolt 329)

 

1. ⋄P A
2. □(~(P ∨ Q) → ~P) N (~(P ∨ Q) → ~P)
3. □~(P ∨ Q) → □~P 2 K
4. ~□~P 1 DUAL
5. ~□~(P ∨ Q) 3, 4 MT
6. ⋄(P ∨ Q) 5 DUAL

(Nolt 329)
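
[Again purely as my own illustration, this sequent can be spot-checked semantically with the earlier sketch (hypothetically saved as s5check.py): if P is true at some world, then P ∨ Q is true at that same world, so ⋄(P ∨ Q) holds.]

from s5check import valid_sequent   # hypothetical module holding the earlier sketch

P, Q = ('atom', 'P'), ('atom', 'Q')
print(valid_sequent([('dia', P)],
                    ('dia', ('or', P, Q)), ['P', 'Q']))   # ⋄P ⊢ ⋄(P ∨ Q), expected: True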

 

Nolt again demonstrates the combined use of rules N and K in the following proof.

⊢ ⋄~P → ~□P

(Nolt 329)

[Recall that the strategic rule for conditionals is: “Hypothesize Φ and work toward the subconclusion Ψ in order to obtain the conditional by →I.” Nolt will import a theorem, affix to it the necessity operator using N, then distribute that operator using K. This will allow him to find the consequent of the goal proposition. So he can use conditional introduction to derive the conclusion.]

 

1. |    ⋄~P H (for →I)
2. |    ~□~~P 1 DUAL
3. |    □(P → ~~P) N (P → ~~P)
4. |    □P → □~~P 3 K
5. |    ~□P 2, 4 MT
6. ⋄~P → ~□P 1-5 →I

(Nolt 330)

 

But Nolt says that we might use a different strategy when proving the similar theorem:

□~P → ~⋄P

(Nolt 330)

[So we recall again that to prove conditionals, we begin by hypothesizing the antecedent in hopes of deriving the consequent, so that the whole goal conditional can be derived in the main proof.

 

1. |    □~P H (for →I)

 

This means that we want to derive ~⋄P in this subproof. Since that is a negated formula, we hypothesize its non-negated form, ⋄P, and look for a contradiction. We will find that contradiction by using Duality to create a form that contradicts the first hypothesized formula, and the rest follows as planned.]

 

1. |    □~P H (for →I)
2. |    |    ⋄P H (for ~I)
3. |    |    ~□~P 2 DUAL
4. |    |    □~P & ~□~P 1,3 &I
5. |    ~⋄P 2-4 ~I
6. □~P → ~⋄P 1-5 →I

(Nolt 330)

 

Nolt now shows again how we can use N and K together, with a new sequent.

□(P → Q) ⊢ □~Q → □~P

(Nolt 330)

[As he will import a theorem, it will be hard to figure this one out on our own. The basic idea is that by applying N and K to this theorem, we will be able to then use conditional elimination (modus ponens) to obtain a formula that will become the goal proposition after we apply rule K to it.]

 

1. □(P → Q) A
2. □((P → Q) → (~Q → ~P)) N ((P → Q) → (~Q → ~P))
3. □(P → Q) → □(~Q → ~P) 2 K
4. □(~Q → ~P) 1,3 →E
5. □~Q → □~P 4 K

(Nolt 330)
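
[A quick semantic spot-check of this sequent with the earlier sketch (hypothetical s5check.py): if P → Q holds at every world, then in any model where ~Q holds at every world, ~P must hold at every world as well.]

from s5check import valid_sequent   # hypothetical module holding the earlier sketch

P, Q = ('atom', 'P'), ('atom', 'Q')
premise = ('box', ('imp', P, Q))
conclusion = ('imp', ('box', ('not', Q)), ('box', ('not', P)))
print(valid_sequent([premise], conclusion, ['P', 'Q']))   # expected: True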

 

Nolt now will show us how the B rule (Brouwer Rule) works. [Recall that it is: “From Φ, infer □◊Φ.”] Nolt will use this rule in the following proof.

⋄□P ⊢ P

(Nolt 330)

[Since we have an atomic formula as our goal proposition, we will hypothesize its negation. This means we also need to find a contradiction. This will be accomplished in a fairly complicated way. First Nolt will use the Brouwer rule to obtain a form of the hypothesis where there is both a necessity and possibility operator. He then will import a theorem of a conditional form, using N and K to introduce and distribute the necessity operator. Then by using duality and modus tollens, he will produce the parts we need for a contradiction. This will allow us to derive our goal proposition in the main proof.]

 

1. ⋄□P A
2. |    ~P H (for ~I)
3. |    □⋄~P 2 B
4. |    □(⋄~P → ~□P) N (⋄~P → ~□P)
5. |    □⋄~P → □~□P 4 K
6. |    ~□~□P 1 DUAL
7. |    ~□⋄~P 5, 6 MT
8. |    □⋄~P & ~□⋄~P 3, 7 &I
9. ~~P 2-8 ~I
10. P 9 ~E

(Nolt 330)
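
[The sequent ⋄□P ⊢ P can also be spot-checked semantically with the earlier sketch (hypothetical s5check.py): if □P is true at some world, then P is true at every world, including the world of evaluation.]

from s5check import valid_sequent   # hypothetical module holding the earlier sketch

P = ('atom', 'P')
print(valid_sequent([('dia', ('box', P))], P, ['P']))   # ⋄□P ⊢ P, expected: True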

 

[Recall the S4 rule: From □Φ, infer □□Φ.] Nolt will now use the S4 rule to prove:

⊢ ⋄⋄P → ⋄P

(Nolt 330)

[Since we have a conditional structure, our strategy will be to hypothesize the antecedent and derive the consequent, so that we can obtain the conditional by →I. So we hypothesize ⋄⋄P, with the aim of deriving ⋄P. Since ⋄P is a simple formula, we proceed indirectly: we hypothesize its negation and try to find a contradiction. Finding that contradiction will not be easy. Nolt in the first place will not directly hypothesize ~⋄P. Recall that ⋄P is equivalent to ~□~P. So Nolt will hypothesize □~P. He will then use S4 to get □□~P. He will also use N to import a theorem and add the necessity operator, then use K to distribute it. This will allow us to derive a formula that contradicts what follows from the first hypothesis we made. And the rest follows as planned.]

 

1. |   ⋄⋄P H (for →I)
2. |    | □~P H (for ~I)
3. |    | □□~P 2 S4
4. |    | □(□~P → ~⋄P) N (□~P → ~⋄P)
5. |    | □□~P → □~⋄P 4 K
6. |    | □~⋄P 3,5 → E
7. |    | ~□~⋄P 1 DUAL
8. |    | □~⋄P & ~□~⋄P 6,7 &I
9. |    ~□~P 2-8 ~I
10. |    ⋄P 9 DUAL
11. ⋄⋄P → ⋄P 1-10 →I

(Nolt 331)
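
[Theorems have no premises, so to spot-check ⊢ ⋄⋄P → ⋄P with the earlier sketch (hypothetical s5check.py) we pass an empty premise list. Semantically the result is unsurprising: whether ⋄P holds does not depend on the world of evaluation, so iterating the possibility operator adds nothing.]

from s5check import valid_sequent   # hypothetical module holding the earlier sketch

P = ('atom', 'P')
theorem = ('imp', ('dia', ('dia', P)), ('dia', P))
print(valid_sequent([], theorem, ['P']))   # ⊢ ⋄⋄P → ⋄P, expected: True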

 

Nolt says that just as with propositional and predicate logic, we can use derived rules by listing “the previously proved sequent to the right, together with the line numbers of the premises (if any) that are instances of the previously proved sequent's premises. (Rules derived from theorems have no premises, and we cite no lines for them)” (Nolt 331). [What Nolt will do, it seems, is use a previously proved sequent, which already has its own premise and conclusion. In the proof at hand, there will be a formula that is the same as the full premise of the previously proved sequent. So on that basis, we can introduce the conclusion of the previously proved sequent, it seems. This happens in line 4.]

□(P → Q) ⊢ ⋄P → ⋄Q

1. □(P → Q) A
2. |    ⋄P H (for →I)
3. |    |    □~Q H (for ~I)
4. |    |    □~Q → □~P 1 □(P → Q) ⊢ □~Q → □~P
5. |    |    □~P 3,4 →E
6. |    |    ~□~P 2 DUAL
7. |    |    □~P & ~□~P 5,6 &I
8. |    ~□~Q 3-7 ~I
9. |    ⋄Q 8 DUAL
10. ⋄P → ⋄Q 2-9 →I

(Nolt 331)
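
[The derived rule itself can be given the same sort of semantic spot-check with the earlier sketch (hypothetical s5check.py).]

from s5check import valid_sequent   # hypothetical module holding the earlier sketch

P, Q = ('atom', 'P'), ('atom', 'Q')
print(valid_sequent([('box', ('imp', P, Q))],
                    ('imp', ('dia', P), ('dia', Q)),
                    ['P', 'Q']))   # □(P → Q) ⊢ ⋄P → ⋄Q, expected: True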

 

Nolt’s next point is that whenever we prove a sequent, we have in effect proven the validity of its form, so it does not matter which formulas are substituted for its sentence letters. This even includes substituting an expression of the form ‘a = b’ for a letter ‘P’, as we will see in lines 5 and 7 of the following proof.

~a = b ⊢ □~a=b

1. ~a=b A
2. |    ~□~a=b H (for ~I)
3. |    ⋄a=b 2 DUAL
4. |    □(a=b → □a=b) N (a=b → □a=b)
5. |    ⋄a=b → ⋄□a=b 4 □(P → Q) ⊢ ⋄P → ⋄Q
6. |    ⋄□a=b 3,5 →E
7. |    a=b 6 ⋄□P ⊢ P
8. |    a=b & ~a=b 1,7 &I
9. ~~□~a=b 2-8 ~I
10. □~a=b 9 ~E

(Nolt 332)

 

Nolt then explains the important thing that this proof establishes, namely that nonidentity, like identity, is necessary, a result that fits with the semantics of rigid designation. [In other words, I think the idea is that a name always has its designation, and in no world does it take a different designation.]

This proof shows that not only is identity necessary as the □= axiom schema asserts, but also nonidentity is necessary – a result fully appropriate in light of the semantics of rigid designation.

(Nolt 332)

 

The next proof will show that whatever is possible is necessarily possible, or that:

⋄P ⊢ □⋄P

(Nolt 332)

[To do this, Nolt will import a theorem and add the necessity operator in line 3, then distribute the operator using K. For this proof, also recall the Brouwer rule: From Φ, infer □◊Φ.]

 

1. ⋄P A
2. □⋄⋄P 1 B
3. □(⋄⋄P → ⋄P) N (⋄⋄P → ⋄P)
4. □⋄⋄P → □⋄P 3 K
5. □⋄P 2,4 →E

(Nolt 332)

 

The next thing that Nolt proves is interesting, namely that whatever is possibly necessary is in fact necessary too, that is, ⋄□P ⊢ □P. [I am not sure how to conceive this, but perhaps the idea is the following. If something is possibly necessary, then it is necessary on at least one world. But if it is necessary on one world, then it is the case on all worlds, and so it is necessary without qualification. Thus possible necessity implies unqualified necessity. But the proof works differently than that reasoning.]

 

1. ⋄□P A
2. □(⋄□P → P) N (⋄□P → P)
3. □⋄□P → □P 2 K
4. □⋄□P 1 ⋄P ⊢ □⋄P
5. □P 3,4 →E

(Nolt 332)
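
[These last two results, ⋄P ⊢ □⋄P and ⋄□P ⊢ □P, reflect the way iterated modal operators collapse in S5. Both can be spot-checked semantically with the earlier sketch (hypothetical s5check.py).]

from s5check import valid_sequent   # hypothetical module holding the earlier sketch

P = ('atom', 'P')
print(valid_sequent([('dia', P)], ('box', ('dia', P)), ['P']))   # ⋄P ⊢ □⋄P, expected: True
print(valid_sequent([('dia', ('box', P))], ('box', P), ['P']))   # ⋄□P ⊢ □P, expected: True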

 

 

 

 

From:

 

Nolt, John. Logics. Belmont, CA: Wadsworth, 1997.

 

 

