## 26 Jun 2016

### Frege (§11) Begriffsschrift, Chapter 1 (Geach transl.), “Generality”, summary

by Corry Shores
[Search Blog Here. Index-tags are found on the bottom of the left column.]

[Central Entry Directory]
[Logic & Semantics, Entry Directory]
[Gottlob Frege, Entry Directory]
[Frege, Begriffsschrift, Chapter 1, Entry Directory]

[The following is summary. Bracketed commentary is my own. Please forgive my typos, as proofreading is incomplete.]

Summary of

Gottlob Frege

Begriffsschrift, Chapter 1
(Geach transl.)

§11 Generality

Brief Summary:
In Frege’s notation system, we denote universal quantification by making an indent into the judgment stroke, above which goes the universally quantified variable, and that variable is also placed in parentheses next to the predicate symbol.
The scope is determined by which stroke (and which part of the stroke) the indent is located on. For example:

Summary

[Recall from section 10 that we can formulate a function using a judgment stroke. For example, we can express an indeterminate function of argument A, that is, Φ(A) with the judgment stroke as
which is read, “A has the property Φ”. An indeterminate function for two arguments may be written Ψ(A,B) and with the judgment stroke as
I am not entirely certain, but Frege’s point now seems to be that in these prior cases, A and B were determinate arguments. In other words, they seem to be like constants that are being predicated. We can instead think of them as being like algebraic variables. Since he says that the notation goes back to what we see above when we substitute something for the letter, that makes me think it is something like a variable. He will give a separate notation for this other situation. The Gothic letter seems like it is the predicate’s variable, and the concave indent into the judgment stroke above which the variable is written again seems to indicate that it is universal quantification. Many details are unclear to me, so let me quote for now:]
In the expression for a judgment, the complex symbol to the right of ⊢ may always be regarded as a function of one of the symbols that occur in it. Let us replace this argument with a Gothic letter, and insert a concavity in the content-stroke, and make this same Gothic letter stand over the concavity: e.g.:
This signifies the judgment that the function is a fact whatever we take its argument to be. A letter used as a functional symbol, like Φ in Φ(A), may itself be regarded as the argument of a function; accordingly, it may be replaced by a Gothic letter, used in the sense I have just specified. The only restrictions imposed on the meaning of a Gothic letter are the obvious ones: (i) that the complex of symbols following a content-stroke must still remain a possible content of judgment (§2); (ii) that if the Gothic letter occurs as a functional symbol, account must be taken of this circumstance. All further conditions imposed upon the allowable substitutions for a Gothic letter must be made part of the judgment. From such a judgment, therefore, we can always deduce any number we like of judgments with less general content, by substituting something different each time for the Gothic letter; when this is done, the concavity in the content-stroke vanishes again. The horizontal stroke that occurs to the left of the concavity in
is the content-stroke for [the proposition] that Φ(ɑ) holds good whatever is substituted for ɑ; the stroke occurring to the right of p. 20] the concavity is the content-stroke of Φ(ɑ) – we must here imagine something definite substituted for ɑ.
(Frege 16)
[Frege’s next point will be about how we can understand the meaning of these formulations when we remove the judgment stroke. See section 2. We would just have the content of the judgment, but not the claim that it is the case, I think.]
By what was said before about the meaning of the judgment-stroke, it is easy to see what an expression like
| means.
(16-17)
[For the next point, let us recall from section 5 how the conditionals are structured. When we have
This would be read, “If B, then A.” Let me quote the following, then discuss it.]
This expression may occur as part of a judgment, as in
(17)
[Here I think he is simply saying that the universally quantified expressions can be found in more complex structures.]
[I do not quite get the next point. He might be saying the following. Suppose we say that for all x, x is P. Now let us negate that: it is not the case that for all x, x is P. This can still mean that there is some substitution, like Pa, that is true. The point is only that not all the substitutions are true. So begin again with the simple case of: for all x, x is P. Here, we can infer that any possible substitution will be true. However, from ‘it is not the case that for all x, x is P’, we cannot infer anything, because we do not know which, if any, of the constants is P. This idea also holds in the conditional structure. Suppose we have ‘if for all x, x is P, then A.’ From this we likewise cannot infer that ‘if Pa, then A’. The reasoning for this instance I am less certain about. One possibility is that the whole conditional can be true when the antecedent is false. So supposing that the antecedent ‘for all x, x is P’ is false and A is false in our example, the whole conditional is still true; yet Pa might nonetheless be true, in which case ‘if Pa, then A’ would be false. So the inference is not truth-preserving.]
It is obvious that from these judgments we cannot infer less general judgments by substituting something definite for ɑ, as we can from
[seems to end the sentence here.]
serves to deny that Χ(ɑ) is always a fact whatever we substitute for ɑ. But this does not in any way deny the possibility of giving ɑ some meaning Δ such that X(Δ) is a fact.
means the case in which
is affirmed and A denied does not occur. But this does not in any way deny the occurrence of the case in which X(Δ) is affirmed and A denied; for, as we have just seen, X(Δ) may be affirmed and nevertheless
denied. Thus here likewise, we cannot make an arbitrary substitution for a without prejudice to the truth of the judgment.
(17)
[Frege will say that this is why we need to indicate the scope of the quantifier. I am not sure how that would change the prior cases. Before we gave the example, ‘if for all x, x is P, then A.’ Perhaps he is saying that the situation would be different if we instead said, ‘for all x, if x is P, then A.’ I am not sure. But maybe in this case we can derive X(Δ). I do not see how, but I also do not yet understand how the example we just saw demonstrates the need for indicating scope. At any rate, he further shows how quantifier scope works in his system.]
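[To see the scope point concretely, here is a small Python sketch of my own (not from Frege’s text, and in modern notation rather than his two-dimensional script): over a finite domain, the narrow-scope reading ‘(∀xPx) → A’ and the wide-scope reading ‘∀x(Px → A)’ can come apart.]

```python
from itertools import product

# A toy two-element domain; P is a unary predicate, A a proposition.
domain = [0, 1]

def forall(pred):
    return all(pred(x) for x in domain)

def wide(P, A):
    # "for all x: if Px then A" -- the quantifier has wide scope
    return forall(lambda x: (not P(x)) or A)

def narrow(P, A):
    # "if (for all x: Px) then A" -- the quantifier has narrow scope
    return (not forall(P)) or A

# Search every interpretation of P, with A false: the readings can disagree.
disagree = False
for p0, p1 in product([False, True], repeat=2):
    table = {0: p0, 1: p1}
    P = table.__getitem__
    if wide(P, False) != narrow(P, False):
        disagree = True
print(disagree)  # True
```

[For instance, if P holds of one individual but not the other and A is false, the narrow-scope conditional is true (its antecedent fails) while the wide-scope one is false.]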
This explains why we need the concavity with the Gothic letter written on it; it delimits the scope of the generality signified by the letter. A Gothic letter retains a fixed meaning only within its scope; the same Gothic letter may occur within various scopes in the same judgment, and the meaning we may ascribe to it in one scope does not extend to any other scope. The scope of one Gothic letter may include that of another, as is shown in p. 21]
In this case different letters must be chosen; we could not replace e by a. It is naturally legitimate to replace a Gothic letter everywhere in its scope by some other definite letter, provided that there are still different letters standing where different letters | stood before. This has no effect on the content. Other substitutions are permissible only if the concavity directly follows the judgment stroke, so that the scope of the Gothic letter is constituted by the content of the whole judgment. Since this is a specially important case, I shall introduce the following abbreviation: an italic letter is always to have as its scope the content of the whole judgment, and thus scope is not marked out by a concavity in the content stroke. If an italic letter occurs in an expression not preceded by a judgment stroke, the expression is senseless.
(Frege 18)

[I may not be following the ideas of italic and Gothic letters very well. In the German version the differences might be a little more apparent. Here are some examples.
(Begriffsschrift und andere Aufsätze p.21)
So we have lower case italic (Roman) letters, lowercase Gothic, and uppercase Greek. There is also the possibility that there are uppercase Roman letters, like A, B, and P, but I am inclined to think they are uppercase Greek, as I do not know why predicates would be distinguished. My guess is that predicates take uppercase Greek letters; predicate constants take lowercase Roman letters; and predicate variables take Gothic letters. It seems that what he says next is something like universal introduction. See Agler Symbolic Logic section 8.1.3.]
An italic letter may always be replaced by a Gothic letter that does not yet occur in the judgment; in this case the concavity must be inserted immediately after the judgment-stroke. E.g. for
we may put
since a occurs only in the argument-position within X(a).
Likewise it is obvious that from
we may deduce
if A is an expression in which a does not occur, and a occupies only argument-positions in Φ(a). If
is denied, we must be able to specify a meaning for a such that Φ(a) is denied. Thus if
were denied and A affirmed, we should have to be able to specify a meaning for a such that A was affirmed and Φ(a) denied.
(18)
[I may be missing his next point. It might be that since we have a conditional, our substitution cannot make it that the antecedent is affirmed and the consequent is denied. Let me quote.]
But since we have
we cannot do so; for this formula means that whatever a may be the case in which Φ(a) would be denied and A affirmed does not | occur. Hence we likewise cannot both deny
and affirm A: i.e.
... Similarly when we have several conditional strokes.
(Frege 18-19)

Frege, Gottlob. “Begriffsschrift (Chapter 1)”. Transl. P.T. Geach. In Translations from the Philosophical Writings of Gottlob Frege. Eds. P.T. Geach and Max Black. Oxford: Basil Blackwell, 1960, second edition (1952 first edition).

An image comes from:
Frege, Gottlob. Begriffsschrift und andere Aufsätze. 2nd edn. Hildesheim/Zürich: Georg Olms, 1993.
.
.

## 24 Jun 2016

### Nolt (11.1) Logics, ‘Modal Operators’, summary


[Logic & Semantics, Entry Directory]

[John Nolt, entry directory]

[Nolt, Logics, entry directory]

[The following is summary. All boldface in quotations are in the original unless otherwise noted. Bracketed commentary is my own.]

Summary of

John Nolt

Logics

Part 4: Extensions of Classical Logic

Chapter 11: Leibnizian Modal Logic

11.1 Modal Operators

Brief summary:

There are certain operators called alethic modifiers (e.g. ‘must’, ‘might’, ‘possibly’, ‘necessarily’), which express modes of truth. Modal logic also studies other propositional modalities: deontic (or ethical) modalities (e.g. ‘it ought to be the case that’, ‘it is forbidden that’); propositional attitudes (‘believes that’, ‘knows that’, ‘hopes that’, ‘wonders whether’); and tenses (‘was’, ‘is’, and ‘will be’). Propositional attitude operators are binary, but all the rest are monadic. ‘It is necessary that Φ’ is written □Φ, and ‘it is possible that Φ’ is written ◊Φ. Many modal operators have duals, meaning that one can be converted into the other by flanking it with negation signs, as for example:

□Φ↔~◊~Φ

◊Φ↔~□~Φ

Summary

There are operators that are inexpressible in predicate logic but are nonetheless very important, called alethic modifiers; they include ‘must,’ ‘might,’ ‘could,’ ‘can,’ ‘have to,’ ‘possibly,’ ‘contingently,’ and ‘necessarily’ (Nolt 307). These words express modes of truth, or alethic modalities. (The Greek word for truth is aletheia, hence the name alethic modifiers.) Modal logic studies the syntax and the semantics of alethic modalities (Nolt 307).

Modal logic also studies other kinds of propositional modalities, namely, {1} deontic (or ethical) modalities, which are “expressed by such constructions as ‘it ought to be the case that’, ‘it is forbidden that’, etc.” (Nolt 307); {2} propositional attitudes, which are “relations between sentient beings and propositions, expressed by such terms as ‘believes that’, ‘knows that’, ‘hopes that’, ‘wonders whether’, and so on”; and {3} tenses, which include past, present, and future tenses “expressed by the various modifications of the verb ‘to be’: ‘was’, ‘is’, and ‘will be’ ” (Nolt 307).

All these alethic modalities can be understood as operators on propositions (Nolt 307). [We will see how the alethic modalities can be placed upon a proposition just like other logical operators can. The following is quotation.]

Consider, for | example, these sentences, all of which involve the application of modal operators (in the broad sense) to the single proposition ‘People communicate’:

Alethic Operators

It is possible that people communicate.

It must be the case that people communicate.

It is contingently the case that people communicate.

It could be the case that people communicate.

It is necessarily the case that people communicate.

Deontic Operators

It is obligatory that people communicate.

It is permissible that people communicate.

It is not allowed that people communicate.

It should be the case that people communicate.

Operators Expressing Propositional Attitudes

Ann knows that people communicate.

Bill believes that people communicate.

Cynthia fears that people communicate.

Don supposes that people communicate.

Everyone understands that people communicate.

Fred doubts that people communicate.

Operators Expressing Tenses

It was (at some time) the case that people communicated.

It was always the case that people communicated.

It will (at some time) be the case that people communicate.

(Nolt 307-308)

This is not even an exhaustive list of the operators in each category.  Nolt also notes that except for the propositional attitude operators, the rest here are monadic.

With the exception of the operators expressing propositional attitudes, all of those listed here are monadic; they function syntactically just like the negation operator ‘it is not the case that’, prefixing a sentence to produce a new sentence. Thus, for example, the operators ‘it is necessary that’, usually symbolized by the box ‘□’ and ‘it is possible that’, usually | symbolized by the diamond sign ‘◊’, are introduced by adding this clause to the formation rules:

If Φ is a formula, then so are □Φ and ◊Φ.

(Nolt 308-309)

The operators for propositional attitudes are binary operators. However, he continues, “unlike such binary operators as conjunction or disjunction, which unite a pair of sentences into a compound sentence, propositional attitude operators take a name and a sentence to make a sentence. The place for this name may be quantified, as in ‘Everyone understands that people communicate’ ” (Nolt 309).

Many operators can be converted into one another through negations that flank the signs.

Many modal operators have duals – operators which, when flanked by negation signs, form their equivalents. The operators ‘□’ and ‘◊’, for example, are duals, as the following sentences assert:

□Φ↔~◊~Φ

◊Φ↔~□~Φ

That is, it is necessary that Φ if and only if it is not possible that not-Φ, and it is possible that Φ if and only if it is not necessary that not-Φ.
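[The duality can be checked mechanically. Below is a small Python sketch of my own (not from Nolt), reading necessity as truth in all possible worlds and possibility as truth in at least one world, per the Leibnizian picture named in the chapter title; the three-world model is an arbitrary assumption.]

```python
from itertools import product

# A toy Leibnizian model: a proposition is the set of worlds where it holds.
worlds = {'w1', 'w2', 'w3'}

def box(prop):      # 'it is necessary that': true in every world
    return prop == worlds

def diamond(prop):  # 'it is possible that': true in at least one world
    return len(prop) > 0

def neg(prop):      # negation: true exactly where prop is false
    return worlds - prop

# Check the dualities for every possible proposition phi over three worlds.
ok = True
for bits in product([False, True], repeat=3):
    phi = {w for w, b in zip(sorted(worlds), bits) if b}
    ok = ok and (box(phi) == (not diamond(neg(phi))))      # square-phi iff not-possibly-not-phi
    ok = ok and (diamond(phi) == (not box(neg(phi))))      # possibly-phi iff not-necessarily-not-phi
print(ok)  # True
```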

There are other duals among these operators as well. Consider the deontic operator ‘it is obligatory that’, which we shall symbolize as ‘O’, and the operator ‘it is permissible that’, which we shall write as ‘P’. These are similarly related:

OΦ↔~P~Φ

PΦ↔~O~Φ

That ‘O’ and ‘P’ should thus mimic ‘□’ and ‘◊’ is understandable, since obligation is a kind of moral necessity and permission a kind of moral possibility.

There are also epistemic (knowledge-related) duals. The operator ‘knows that’ is dual with the operator ‘it is epistemically possible, for ... that’ – the former representing epistemic necessity (knowledge) and the latter epistemic possibility. (Something is epistemically possible for a person if so far as that person knows it might be the case.) Symbolizing ‘knows that’ by ‘K’ and ‘it is epistemically possible for ... that’ by ‘E’, we have:

pKΦ↔~pE~Φ

pEΦ↔~pK~Φ

In English: p knows that Φ if and only if it is not epistemically possible for p that not-Φ; and it is epistemically possible for p that Φ if and only if p does not know that not-Φ (‘p’, of course, stands for a person).

There are temporal duals as well. Let ‘P’ mean ‘it was (at some time) the case that’ and ‘H’ mean ‘it has always been the case that’. Then:

HΦ↔~P~Φ

PΦ↔~H~Φ

| Here ‘H’ represents a kind of past tense temporal necessity and ‘P’ a kind of past tense temporal possibility. A similar relationship holds between ‘it always will be the case that’ and ‘it sometimes will be the case that’, and between other pairs of temporal operators.

(Nolt 309-310)

Nolt then notices how these duals remind us of two laws in predicate logic

(∀x)Φ↔~(∃x)~Φ

(∃x)Φ↔~(∀x)~Φ

(Nolt 310)

And he wonders if the duals are analogous to quantifiers (Nolt 310).
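[The analogy can be made vivid over a finite domain. The three-element domain below is my own illustrative assumption, not Nolt’s; the sketch exhaustively checks the quantifier dualities for every possible extension of Φ.]

```python
from itertools import product

domain = [0, 1, 2]

# Check (forall x)Phi iff not (exists x) not Phi, and its mirror image,
# for every one of the 2^3 extensions of Phi over the domain.
ok = True
for bits in product([False, True], repeat=len(domain)):
    phi = dict(zip(domain, bits))
    ok = ok and (all(phi[x] for x in domain)
                 == (not any(not phi[x] for x in domain)))
    ok = ok and (any(phi[x] for x in domain)
                 == (not all(not phi[x] for x in domain)))
print(ok)  # True
```

[Comparing this with the possible-worlds check of □/◊ above suggests why the dual pattern recurs: ‘in all worlds’ stands to ‘in some world’ just as ‘for all x’ stands to ‘for some x’.]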

Nolt, John. Logics. Belmont, CA: Wadsworth, 1997.

.

.

### Nolt, Logics, entry directory


[Logic & Semantics, Entry Directory]
[John Nolt, entry directory]

[Note, starting with section 16.2, the paragraphs are enumerated, and the larger sections are divided by the numbered paragraphs that are summarized in that post.]

An entry directory like this but with the brief summaries included can be found at:
http://piratesandrevolutionaries.blogspot.com/2018/03/nolt-cbs-logics-collected-brief.html

Entry Directory for

John Nolt

Logics

Part 3: Classical Predicate Logic

Chapter 6: Classical Predicate Logic: Syntax

Chapter 8: Classical Predicate Logic: Inference

Part 4: Extensions of Classical Logic

Chapter 11: Leibnizian Modal Logic

Chapter 12: Kripkean Modal Logic

Chapter 13: Deontic and Tense Logics

13.2 A Modal Tense Logic

Chapter 14: Higher-Order Logics

Part 5: Nonclassical Logics

Chapter 15: Mildly Nonclassical Logics

15.3 Supervaluations

16.2 Intuitionistic Logics

16.3 Relevance Logics

Nolt, John. Logics. Belmont, CA: Wadsworth, 1997.

.
.

### John Nolt, entry directory


[Central Entry Directory]
[Logic & Semantics, Entry Directory]

Entry Directory for

John Nolt

Logics

Nolt, Logics, entry directory

Image from:

https://works.bepress.com/john_nolt/

.

### Agler (8.3) Symbolic Logic: Syntax, Semantics, and Proof, 'Sample Proofs', summary


[Agler’s Symbolic Logic, entry directory]

[The following is summary. Boldface (except for metavariables) and bracketed commentary are my own. Please forgive my typos, as proofreading is incomplete. I highly recommend Agler’s excellent book. It is one of the best introductions to logic I have come across.]

Summary of

David W. Agler

Symbolic Logic: Syntax, Semantics, and Proof

Ch.8: Predicate Logic Derivations

8.3 Sample Proofs

Brief summary:

Agler illustrates the rules of the derivation system for making proofs in the language of predicate logic with a set of examples.

Summary

Agler will show us how to use the quantifier rules for making proofs by working through a series of examples.

The first one is for:

(∀x)(Ax→Bx), (∃x)¬Bx ⊢ (∃x)¬Ax

(Agler 350)

[We can see that we will probably want to use modus tollens to derive the negation of the antecedent in the first premise. We cannot do this directly. So we will use an existential elimination assumption for premise 2. Then we can use universal elimination to give us the conditional in a form that we can work with. Then we can get a negated antecedent with a constant. This will allow us to use existential introduction to get the goal proposition, which we will move into the main proof using existential elimination.]

1   (∀x)(Ax→Bx)    P
2   (∃x)¬Bx        P
3   | ¬Ba          A/∃E
4   | Aa→Ba        1∀E
5   | ¬Aa          3,4MT
6   | (∃x)¬Ax      5∃I
7   (∃x)¬Ax        2,3–6∃E

(Agler 350)
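[Before following the derivation, we can sanity-check the argument semantically. This Python brute-force search over a two-element domain is my own sketch, not Agler’s method; finding no counterexample over this small domain is evidence of validity, not a proof.]

```python
from itertools import product

domain = [0, 1]
valid = True  # no counterexample found yet
for a_bits in product([False, True], repeat=2):
    for b_bits in product([False, True], repeat=2):
        A = dict(zip(domain, a_bits))
        B = dict(zip(domain, b_bits))
        p1 = all((not A[x]) or B[x] for x in domain)  # (forall x)(Ax -> Bx)
        p2 = any(not B[x] for x in domain)            # (exists x) not Bx
        c = any(not A[x] for x in domain)             # (exists x) not Ax
        if p1 and p2 and not c:                       # premises true, conclusion false?
            valid = False
print(valid)  # True
```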

Agler then shows another proof for

(∀x)(∀y)(Pxy∧Qxy), (∀z)Pzz→Sb ⊢ (∃x)Sx

(Agler 350)

[For this, we see that we could use existential introduction on the consequent of premise 2 to get the goal proposition. For that we should obtain the antecedent. We could get the antecedent by using universal introduction on a formula of the form Paa. We could obtain such a formula by first applying universal elimination twice to the conjunction in the first premise and then using conjunction elimination to extract the first conjunct.]

1   (∀x)(∀y)(Pxy∧Qxy)    P
2   (∀z)Pzz→Sb           P/(∃x)Sx
3   (∀y)(Pay∧Qay)        1∀E
4   Paa∧Qaa              3∀E
5   Paa                  4∧E
6   (∀z)Pzz              5∀I
7   Sb                   2,6→E
8   (∃x)Sx               7∃I

(Agler 350)
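[Again, as my own semantic sanity check (not Agler’s), a brute-force search over a two-element domain, with the constant b interpreted as one of its elements, finds no counterexample.]

```python
from itertools import product

domain = [0, 1]
pairs = [(x, y) for x in domain for y in domain]
b = 0  # interpretation of the constant b (an arbitrary choice)

valid = True
for p_bits in product([False, True], repeat=len(pairs)):
    P = dict(zip(pairs, p_bits))
    for q_bits in product([False, True], repeat=len(pairs)):
        Q = dict(zip(pairs, q_bits))
        for s_bits in product([False, True], repeat=len(domain)):
            S = dict(zip(domain, s_bits))
            # (forall x)(forall y)(Pxy and Qxy)
            prem1 = all(P[(x, y)] and Q[(x, y)]
                        for x in domain for y in domain)
            # (forall z)Pzz -> Sb
            prem2 = (not all(P[(z, z)] for z in domain)) or S[b]
            concl = any(S[x] for x in domain)  # (exists x)Sx
            if prem1 and prem2 and not concl:
                valid = False
print(valid)  # True
```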

Agler then has us consider one final example, a theorem.

⊢ (∀x)[¬(Qx→Rx)→¬Px]→(∀x)[¬(Px→Rx)→¬(Px→Qx)]

(Agler 350)

[Since we have a conditional, the strategy will be to assume the antecedent and derive the consequent. So first we make that assumption. Next we should build the conclusion’s main consequent with particular constants and then use universal introduction to get the form we need. So we will want ¬(Pa→Ra)→¬(Pa→Qa). Again we will assume this antecedent and derive the consequent. At this point, in order to derive ¬(Pa→Qa), we will assume the unnegated form and seek a contradiction, and then use negation introduction. Agler does this through a series of steps that do not come intuitively to me, so I will leave the reasoning to be seen in the proof.]

1    | (∀x)[¬(Qx→Rx)→¬Px]         A/(∀x)[¬(Px→Rx)→¬(Px→Qx)]
2    | | ¬(Pa→Ra)                 A/¬(Pa→Qa)
3    | | | Pa→Qa                  A/P∧¬P
4    | | | ¬(Qa→Ra)→¬Pa           1∀E
5    | | | ¬(¬Pa∨Ra)              2IMP
6    | | | Pa∧¬Ra                 5DEM+DN
7    | | | Pa                     6∧E
8    | | | Qa                     3,7→E
9    | | | ¬¬Pa                   7DN
10   | | | Qa→Ra                  4,9MT+DN
11   | | | Ra                     8,10→E
12   | | | ¬Ra                    6∧E
13   | | ¬(Pa→Qa)                 3–12¬I
14   | ¬(Pa→Ra)→¬(Pa→Qa)          2–13→I
15   | (∀x)[¬(Px→Rx)→¬(Px→Qx)]    14∀I
16   (∀x)[¬(Qx→Rx)→¬Px]→(∀x)[¬(Px→Rx)→¬(Px→Qx)]    1–15→I

(Agler 351)
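[As my own check (not Agler’s), we can confirm semantically that this formula holds in every interpretation over a two-element domain, by brute force over all extensions of P, Q, and R.]

```python
from itertools import product

def implies(p, q):
    return (not p) or q

domain = [0, 1]
extensions = list(product([False, True], repeat=len(domain)))

taut = True
for p_bits in extensions:
    for q_bits in extensions:
        for r_bits in extensions:
            P = dict(zip(domain, p_bits))
            Q = dict(zip(domain, q_bits))
            R = dict(zip(domain, r_bits))
            # (forall x)[not(Qx -> Rx) -> not Px]
            ante = all(implies(not implies(Q[x], R[x]), not P[x])
                       for x in domain)
            # (forall x)[not(Px -> Rx) -> not(Px -> Qx)]
            cons = all(implies(not implies(P[x], R[x]),
                               not implies(P[x], Q[x]))
                       for x in domain)
            taut = taut and implies(ante, cons)
print(taut)  # True
```

[The semantic reason is visible pointwise: the antecedent says Px → (Qx→Rx) for each x, from which (Px→Qx)→(Px→Rx) follows.]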

Agler, David. Symbolic Logic: Syntax, Semantics, and Proof. New York: Rowman & Littlefield, 2013.

Some changes to the book quotations may have been made, as based on: http://markdfisher.com/wp-content/uploads/2014/02/PHIL_012_ONLINE_SYLLABUS_SP14-3-1.pdf .

.

### Agler (8.2) Symbolic Logic: Syntax, Semantics, and Proof, 'Quantifier Negation (QN)', summary


[Agler’s Symbolic Logic, entry directory]

[The following is summary. Boldface (except for metavariables) and bracketed commentary are my own. Please forgive my typos, as proofreading is incomplete. I highly recommend Agler’s excellent book. It is one of the best introductions to logic I have come across.]

Summary of

David W. Agler

Symbolic Logic: Syntax, Semantics, and Proof

Ch.8: Predicate Logic Derivations

8.2 Quantifier Negation (QN)

Brief summary:

The four underived quantifier rules for making proofs in the language of predicate logic (RL) make up a derivation system called RD. To this we add the derived equivalence rule quantifier negation (QN) to form the deduction system RD+. And since it is an equivalence rule, QN can apply to quantifiers that are not main operators.

Quantifier Negation (QN)
From a negated universally quantified expression ‘¬(∀x)P,’ an existentially quantified expression ‘(∃x)¬P’ can be derived, and vice versa. Also, from a negated existentially quantified expression ‘¬(∃x)P,’ a universally quantified expression ‘(∀x)¬P’ can be inferred, and vice versa.
¬(∀x)P ⊣⊢ (∃x)¬P    QN
¬(∃x)P ⊣⊢ (∀x)¬P    QN
(Agler 348)

Summary

In the prior section 8.1, we devised a system of deduction for making proofs in the language of predicate logic (RL). That system is called RD. We will now add an equivalence rule, namely, quantifier negation (QN) so that we can complete our proofs more efficiently, and all the rules together with this new addition make RD+. The new rule will help us deal with negated quantifiers.

Quantifier Negation (QN)
From a negated universally quantified expression ‘¬(∀x)P,’ an existentially quantified expression ‘(∃x)¬P’ can be derived, and vice versa. Also, from a negated existentially quantified expression ‘¬(∃x)P,’ a universally quantified expression ‘(∀x)¬P’ can be inferred, and vice versa.
¬(∀x)P ⊣⊢ (∃x)¬P    QN
¬(∃x)P ⊣⊢ (∀x)¬P    QN
(Agler 348)

Agler gives this simple illustration.

1   ¬(∀x)Px    P
2   (∃x)¬Px    1QN
3   ¬(∀x)Px    2QN

(Agler 348)

He gives another example that works the same way, but now with a conjunction.

1   ¬(∃z)(Wzz∧Mz)    P
2   (∀z)¬(Wzz∧Mz)    1QN
3   ¬(∃z)(Wzz∧Mz)    2QN

(Agler 349)

Agler then shows how this basic procedure works when we have double negations.

1   ¬(∀z)¬(Wzz→¬Mz)    P
2   (∃x)(Px∧Rx)        P
3   (∃z)¬¬(Wzz→¬Mz)    1QN
4   (∃z)(Wzz→¬Mz)      3DN
5   ¬¬(∃x)(Px∧Rx)      2DN
6   ¬(∀z)¬(Wzz→¬Mz)    3QN

(Agler 349)

Something very important to note is that “Since quantifier negation is an equivalence rule, it can be applied to subformulas within wffs and not merely to whole propositions whose main operator is the negation” (Agler 349). Agler shows this in the following proof.

1   ¬(∀z)¬(∃y)[Wzy→¬(∀x)¬My]    P
2   ¬(∀z)¬(∃y)[Wzy→(∃x)¬¬My]    1QN
3   (∃z)¬¬(∃y)[Wzy→(∃x)¬¬My]    2QN
4   (∃z)(∃y)[Wzy→(∃x)¬¬My]      3DN
5   (∃z)(∃y)[Wzy→(∃x)My]        4DN

(Agler 349)

As we can see, in line 2 we used quantifier negation on a quantifier that is not the main operator.
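[That QN may be applied to embedded quantifiers is underwritten by its being an equivalence. As my own illustration (not Agler’s), we can verify over a small domain that ¬(∀x)Px and (∃x)¬Px remain interchangeable even inside a larger formula, such as a conjunction with some proposition A.]

```python
from itertools import product

domain = [0, 1]

# Check that the QN equivalence holds as a subformula: for every A and
# every extension of P, "A and not-(forall x)Px" matches "A and (exists x) not Px".
ok = True
for A in [False, True]:
    for bits in product([False, True], repeat=2):
        P = dict(zip(domain, bits))
        lhs = A and (not all(P[x] for x in domain))  # A ∧ ¬(∀x)Px
        rhs = A and any(not P[x] for x in domain)    # A ∧ (∃x)¬Px
        ok = ok and (lhs == rhs)
print(ok)  # True
```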

Agler notes that QN is a derived rule, which means it can be proved using the other underived quantifier rules. This means that we need to make proofs that show:

¬(∀x)P⊣ ⊢ (∃x)¬P
¬(∃x)P⊣ ⊢ (∀x)¬P

Agler will make a proof for just ¬(∀x)P⊢(∃x)¬P, and he leaves the remaining three cases for exercises.

1   ¬(∀x)Px         P/(∃x)¬Px
2   | ¬(∃x)¬Px      A/contra
3   | | ¬Pa         A/contra
4   | | (∃x)¬Px     3∃I
5   | | ¬(∃x)¬Px    2R
6   | Pa            3–5¬E
7   | (∀x)Px        6∀I
8   | ¬(∀x)Px       1R
9   (∃x)¬Px         2–8¬E

(Agler 349)

Agler, David. Symbolic Logic: Syntax, Semantics, and Proof. New York: Rowman & Littlefield, 2013.

Some changes to the book quotations may have been made, as based on: http://markdfisher.com/wp-content/uploads/2014/02/PHIL_012_ONLINE_SYLLABUS_SP14-3-1.pdf .

.

### Agler (8.1) Symbolic Logic: Syntax, Semantics, and Proof, "Four Quantifier Rules", summary


[The following is summary. Boldface (except for metavariables) and bracketed commentary are my own. Please forgive my typos, as proofreading is incomplete. I highly recommend Agler’s excellent book. It is one of the best introductions to logic I have come across.]

Summary of

David W. Agler

Symbolic Logic: Syntax, Semantics, and Proof

Ch.8: Predicate Logic Derivations

8.1 Four Quantifier Rules

Brief summary:
For the language of predicate logic (RL), there are four proof derivation rules to add to those of propositional logic (PL), namely:

Universal Elimination (∀E)
From any universally quantified proposition ‘(∀x)P,’ we can derive a substitution instance ‘P(a/x)’ in which all bound variables are consistently replaced with any individual constant (name).
(∀x)P
P(a/x)    ∀E

Existential Introduction (∃I)
From any possible substitution instance ‘P(a/x),’ an existentially quantified proposition ‘(∃x)P’ can be derived by consistently replacing at least one individual constant (name) with an existentially quantified variable.
P(a/x)
(∃x)P    ∃I

Universal Introduction (∀I)
A universally quantified proposition ‘(∀x)P’ can be derived from a possible substitution instance ‘P(a/x)’ provided (1) ‘a’ does not occur as a premise or as an assumption in an open subproof, and (2) ‘a’ does not occur in ‘(∀x)P.’
P(a/x)
(∀x)P    ∀I

Existential Elimination (∃E)
From an existentially quantified expression ‘(∃x)P,’ an expression ‘Q’ can be derived from the derivation of an assumed substitution instance ‘P(a/x)’ of ‘(∃x)P’ provided (1) the individuating constant ‘a’ does not occur in any premise or in an active proof (or subproof) prior to its arbitrary introduction in the assumption ‘P(a/x),’ and (2) the individuating constant ‘a’ does not occur in proposition ‘Q’ discharged from the subproof.
(∃x)P
| P(a/x)
| .
| .
| .
| Q
Q    ∃E

This new system of derivation is called RD. And corresponding to these are four additional strategic rules for making proofs in RL.
SQ#1(∀E): When using (∀E), the choice of substitution instances ‘P(a/x)’ should be guided by the individual constants (names) already occurring in the proof and any individual constants (names) occurring in the conclusion.
(Agler 328)
SQ#2(∃I): When using (∃I), aim at deriving a substitution instance ‘P(a/x)’ such that a use of (∃I) will result in the desired conclusion. (In other words, if the ultimate goal is to derive ‘(∃x)Px,’ aim to derive a substitution instance of ‘(∃x)Px,’ like ‘Pa,’ ‘Pb,’ ‘Pr,’ so that a use of (∃I) will result in ‘(∃x)Px.’)
SQ#3(∀I): When the goal proposition is a universally quantified proposition ‘(∀x)P,’ derive a substitution instance ‘P(a/x)’ such that a use of (∀I) will result in the desired conclusion.
SQ#4(∃E) Generally, when deciding upon a substitution instance ‘P(a/x)’ to assume for a use of (∃E), choose one that is foreign to the proof.

Summary

Ch.8: Predicate Logic Derivations

We will now add new derivation rules for the language of predicate logic (RL) to the ones from the language of propositional logic (PL).

8.1 Four Quantifier Rules

The natural deduction system for RL that Agler will lay out is called RD, and it “consists of four quantifier derivation rules” (Agler 325). Two of the rules (‘∀E’ and ‘∃E’) are for eliminating quantifiers and two others (‘∀I’ and ‘∃I’) introduce them (Agler 325).

Universal Elimination (∀E)
From any universally quantified proposition ‘(∀x)P,’ we can derive a substitution instance ‘P(a/x)’ in which all bound variables are consistently replaced with any individual constant (name).
(∀x)P
P(a/x)    ∀E
(Agler 325)

Agler gives this simple example.

1   (∀x)Px    P
2   Pa        1∀E

(Agler 325)

For this derivation, the substitution of the constant ‘a’ for x is possible because the x is bound by the quantifier. And we can choose whichever constant we want from the domain; we can also derive any other constants instead or in addition, as seen below.

1   (∀x)Px    P
2   Pa        1∀E
3   Pb        1∀E
4   Pc        1∀E
5   Pd        1∀E
6   Pe100     1∀E

(Agler 326)

Note also that all bound variables of a universal quantifier must be substituted consistently.

1   (∀x)Pxx    P
2   Paa        1∀E
3   Pbb        1∀E

(Agler 326)

And thus we cannot do the following.

1   (∀x)Pxx    P
2   Pab        1∀E–NO!
3   Pba        1∀E–NO!
(Agler 326)
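[Semantically, the reason inconsistent substitution is banned can be seen in a toy model of my own (not Agler’s): (∀x)Pxx can be true while Pab is false, so deriving Pab by ∀E would be unsound.]

```python
# A toy model over two individuals where P holds of each thing paired
# with itself, but never across distinct individuals.
domain = ['a', 'b']
P = {('a', 'a'): True, ('b', 'b'): True,
     ('a', 'b'): False, ('b', 'a'): False}

premise = all(P[(x, x)] for x in domain)  # (forall x)Pxx is true here
paa = P[('a', 'a')]                       # consistent instance Paa: true
pab = P[('a', 'b')]                       # inconsistent "instance" Pab: false
print(premise, paa, pab)  # True True False
```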

Agler will first show us an argument in English.

1   Everyone is a person.                          P
2   If Alfred is a person, then Bob is a zombie.   P
3   Alfred is a person.                            1∀E
4   Therefore, Bob is a zombie.                    2,3→E
(Agler 326)

Formulated in RL, the above argument would look like the following.
(∀x)Px, Pa→Zb ⊢ Zb

1   (∀x)Px    P
2   Pa→Zb     P
3   Pa        1∀E
4   Zb        2,3→E
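[As my own semantic check (not Agler’s), a brute-force search over the interpretations of P and Z confirms the argument is valid: there is no way to make both premises true and Zb false.]

```python
from itertools import product

domain = ['a', 'b']  # 'a' for Alfred, 'b' for Bob
valid = True
for p_bits in product([False, True], repeat=2):
    for z_bits in product([False, True], repeat=2):
        P = dict(zip(domain, p_bits))   # "... is a person"
        Z = dict(zip(domain, z_bits))   # "... is a zombie"
        prem1 = all(P[x] for x in domain)   # (forall x)Px
        prem2 = (not P['a']) or Z['b']      # Pa -> Zb
        if prem1 and prem2 and not Z['b']:  # counterexample to Zb?
            valid = False
print(valid)  # True
```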

Agler then gives another proof for the following.
(∀x)[Px→(∀y)(Qx→Wy)],Pb∧Qb ⊢ Wt
Let us set it up.

1   (∀x)[Px→(∀y)(Qx→Wy)]    P
2   Pb∧Qb                   P

[We see that if we could affirm the antecedent of the conditional in line 1, then we could derive the consequent formulation. We will be able to substitute what we want for the antecedent, so we should derive merely Pb from 2 for this purpose, then use universal elimination to make the antecedent be Pb, then use conditional elimination to derive the consequent.]

1   (∀x)[Px→(∀y)(Qx→Wy)]    P
2   Pb∧Qb                   P
3   Pb                      2∧E
4   Pb→(∀y)(Qb→Wy)          1∀E
5   (∀y)(Qb→Wy)             3,4→E

[Agler has us note that in line 4, when we used universal elimination, there were two bound instances of x that we substituted consistently. He also has us note that in line 4, ∀y is not the main operator, which is why the rule did not apply to it in that instance.] [Now we know that we need to derive Wt eventually. We see that we could obtain it by affirming Qb and by using universal elimination on line 5 to make the consequent Wt. So let us get Qb, use it in this way, and derive our goal proposition.]

1   (∀x)[Px→(∀y)(Qx→Wy)]    P
2   Pb∧Qb                   P
3   Pb                      2∧E
4   Pb→(∀y)(Qb→Wy)          1∀E
5   (∀y)(Qb→Wy)             3,4→E
6   Qb→Wt                   5∀E
7   Qb                      2∧E
8   Wt                      6,7→E
(Agler 327)

The question still remains, how do we know which constants to choose when using universal elimination? Agler says this decision is guided by two considerations.
(1) It is guided by individual constants already occurring in the proof.
(2) It is guided by individual constants in the conclusion.
(Agler 328)
So let us look more at the first concern for the constants already there in the proof. Agler has us consider the example proof from above. [Let us first set it up.]

 1   (∀x)[Px→(∀y)(Qx→Wy)]    P
 2   Pb∧Qb                   P
 3   Pb                      2∧E
 4   Pb→(∀y)(Qb→Wy)          1∀E
 5   (∀y)(Qb→Wy)             3,4→E
(Agler 328)

As we can see, by choosing the substitution 'b' in line 4, we have set ourselves up for the next line that will lead us to the conclusion. So this shows why in many cases we will want to pick a constant that is used previously in the proof.

The second consideration is to pick constants used in the conclusion. Agler has us look at the last part of the same proof from above.

 5   (∀y)(Qb→Wy)    3,4→E
 6   Qb→Wt          5∀E
 7   Qb             2∧E
 8   Wt             6,7→E
(Agler 328)

We know that we need to eventually get Wt in the conclusion, so in line 6, we made a substitution of 't' so that we could eventually derive Wt (Agler 328).

On the basis of these considerations, Agler formulates strategic rule SQ#1(∀E).
SQ#1(∀E): When using (∀E), the choice of substitution instances ‘P(a/x)’ should be guided by the individual constants (names) already occurring in the proof and any individual constants (names) occurring in the conclusion.
(Agler 328)
To illustrate, Agler has us consider the following proof.
(∀x)(Pxa∧Qxa) ⊢ (Paa∧Pba)∧Pma
(Agler 328)
[Let us set it up.]

 1   (∀x)(Pxa∧Qxa)    P

[For the conclusion we will need to derive Paa, Pba, and Pma. So we can get those parts established now.]

 1   (∀x)(Pxa∧Qxa)    P
 2   Paa∧Qaa          1∀E
 3   Pba∧Qba          1∀E
 4   Pma∧Qma          1∀E

[Now we can use conjunction elimination to get the different parts we need independently.]

 1   (∀x)(Pxa∧Qxa)    P
 2   Paa∧Qaa          1∀E
 3   Pba∧Qba          1∀E
 4   Pma∧Qma          1∀E
 5   Paa              2∧E
 6   Pba              3∧E
 7   Pma              4∧E

[And then with conjunction introduction we can build up the formula in the conclusion.]

 1   (∀x)(Pxa∧Qxa)    P
 2   Paa∧Qaa          1∀E
 3   Pba∧Qba          1∀E
 4   Pma∧Qma          1∀E
 5   Paa              2∧E
 6   Pba              3∧E
 7   Pma              4∧E
 8   Paa∧Pba          5,6∧I
 9   (Paa∧Pba)∧Pma    7,8∧I

(Agler 328-329)

8.1.2 Existential Introduction (∃I)

If we have an unquantified expression with a constant, we can substitute a variable for the constant and produce an existentially quantified expression.

 Existential Introduction (∃I): From any possible substitution instance ‘P(a/x),’ an existentially quantified proposition ‘(∃x)P’ can be derived by consistently replacing at least one individual constant (name) with an existentially quantified variable.

     P(a/x)
     (∃x)P     ∃I
(Agler 329)

Agler gives this simple example.
Zr ⊢ (∃x)Zx

 1   Zr        P
 2   (∃x)Zx    1∃I
(Agler 329)

Agler then gives a proof for:
Pa→Qa, Pa ⊢ (∃y)Qy∧(∃x)Px
(Agler 329)
[Let us set it up first.]

 1   Pa→Qa    P
 2   Pa       P

[We can see that we have the parts we need. First we will isolate the two predicates, then we will convert them to the existentially quantified expressions that we need, and finally we will combine them to get our goal proposition.]

 1   Pa→Qa             P
 2   Pa                P
 3   Qa                1,2→E
 4   (∃x)Px            2∃I
 5   (∃y)Qy            3∃I
 6   (∃x)Px∧(∃y)Qy     4,5∧I
(Agler 329)

Agler then clarifies two things about using existential introduction: if there are two or more occurrences of the same constant, we can substitute the variable for just one, several, or all of them; and if there are two or more different constants, we can only make the substitution for one constant (or for one set of identical constants). To make this apparent, he illustrates with the following instances.

 1   Laab         P
 2   (∃x)Lxab     1∃I
 3   (∃x)Laxb     1∃I
 4   (∃x)Lxxb     1∃I
 5   (∃x)Lxxx     1∃I—NO!
(Agler 330)

Excluding line 5, each of the above uses of (∃I) is valid. Notice that at lines 2 and 3, only one individual constant from line 1 is replaced by the use of (∃I). This is acceptable since (∃I) says that an existentially quantified proposition ‘(∃x)Px’ can be derived by replacing at least one individual constant with an existentially quantified variable. Also note that line 4 is correct since while more than one individual constant is being replaced, it is being replaced in a uniform manner. That is, more than one ‘a’ is being replaced by existentially quantified x’s. However, note that line 5 is not correct since the replacement of bound variables is not uniform. At line 5, not only is each ‘a’ being replaced by an x but so is every ‘b.’
(Agler 330)

Agler shows how existential introduction works with English sentences first with this simple example.

 1   Rick is a zombie.                 P
 2   Therefore, someone is a zombie.   1∃I
(Agler 330)

The next series shows how when there are two constants, we can replace either or both.

 1   Alfred loves Alfred.             P
 2   Someone loves Alfred.            1∃I
 3   Alfred loves someone.            1∃I
 4   Someone loves him- or herself.   1∃I

 1   Laa         P
 2   (∃x)Lxa     1∃I
 3   (∃x)Lax     1∃I
 4   (∃x)Lxx     1∃I
(Agler 330)
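We can check these three inferences semantically with a small finite model; the two-element domain and the interpretation of L are my own illustrative assumptions, not Agler's.

```python
# A model of 'Alfred loves Alfred': the pair ("a", "a") is in the relation L.
domain = {"a", "b"}       # a = Alfred, b = someone else (illustrative)
L = {("a", "a")}          # Laa

def exists(pred):
    """Truth of an existentially quantified formula over the domain."""
    return any(pred(x) for x in domain)

assert exists(lambda x: (x, "a") in L)  # (∃x)Lxa: someone loves Alfred
assert exists(lambda x: ("a", x) in L)  # (∃x)Lax: Alfred loves someone
assert exists(lambda x: (x, x) in L)    # (∃x)Lxx: someone loves him- or herself
```

Each derived existential is witnessed by the same object, Alfred, which is why all three uses of (∃I) are truth-preserving here.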

Now with the help of English sentences Agler will illustrate why constants need to be replaced consistently.

 1   Alice loves Bob.            P
 2   Someone loves themselves.   1∃I—NO!

 1   Lab         P
 2   (∃x)Lxx     1∃I—NO!
(Agler 331)

However, this situation can be correctly converted by using a second instance of existential introduction.

 1   Lab             P/(∃x)(∃y)Lxy
 2   (∃y)Lay         1∃I
 3   (∃x)(∃y)Lxy     2∃I
(Agler 331)

Agler then shows some more mistaken uses of existential introduction.

 1   Lab                   P
 2   Lab→Rab               P
 3   (∃x)Lxx               1∃I—NO!
 4   (∃x)Lax→Rab           2∃I—NO!
 5   (∃x)Lax→(∃x)Rax       4∃I—NO!
(Agler 331)

When we use existential introduction, the substitution applies to whole formulas. So this is why lines 4 and 5 are incorrect. [Suppose line 2 were Laa→Raa. Would one possible derivation be: (∃x)(Lax→Rax)? And what about: (∃x)Lax→Rab? I am not exactly sure, when we are dealing with a formula like this, whether we would need to substitute for all the same constants, even though for the example Laa above we did not need to.] We already know from before that line 3 is incorrect because it does not consistently make its substitutions, as it substitutes for two different constants rather than for just one.

Agler will now show more examples to help us use existential introduction strategically. We first consider:
(∀x)(Px∧Rx), Pa→Wb ⊢ (∃y)Wy
(Agler 331)
[Let us first set it up.]

 1   (∀x)(Px∧Rx)    P
 2   Pa→Wb          P/(∃y)Wy

[We know that we need (∃y)Wy. So we need a W+constant. We see that we have Wb as the consequent in line 2. So we just need Pa to derive it. We could derive Pa by first substituting the variables in line 1 and then using conjunction elimination. So let us do that.]

 1   (∀x)(Px∧Rx)    P
 2   Pa→Wb          P/(∃y)Wy
 3   Pa∧Ra          1∀E
 4   Pa             3∧E
 5   Wb             2,4→E
 6   (∃y)Wy         5∃I
(Agler 332)

With this in mind, Agler formulates strategic rule SQ#2(∃I):
SQ#2(∃I): When using (∃I), aim at deriving a substitution instance ‘P(a/x)’ such that a use of (∃I) will result in the desired conclusion.
In other words, if the ultimate goal is to derive ‘(∃x)Px,’ aim to derive a substitution instance of ‘(∃x)Px,’ like ‘Pa,’ ‘Pb,’ ‘Pr,’ so that a use of (∃I) will result in ‘(∃x)Px.’
(Agler 332)

Agler says that we can think of this in terms of working backward in the proof. He has us consider this argument again from above.
(∀x)(Px∧Rx), Pa→Wb ⊢ (∃y)Wy
(Agler 332)
We can imagine ourselves placing the conclusion at some indeterminate step at the very end. Then the step before it, we have the predicate with any possible constant that can be substituted for the variable in the conclusion.

 1     (∀x)(Px∧Rx)       P
 2     Pa→Wb             P/(∃y)Wy
 .     .
 .     .
 k     W(a, ..., v/x)
 k+1   (∃y)Wy            k∃I
(Agler 332)

Agler will show this working-backward method with another example.
(∀x)Px, (∀y)Zy⊢(∃x)(Px∧Zx)
(Agler 332)

 1     (∀x)Px             P
 2     (∀y)Zy             P/(∃x)(Px∧Zx)
 .     .
 .     .
 k     P(a/x)∧Z(a/x)
 k+1   (∃x)(Px∧Zx)        k∃I
(Agler 332)

[Obtaining the penultimate step is straightforward. We use universal elimination on lines 1 and 2 to obtain the parts for the penultimate step, then we combine them using conjunction introduction.]

 1   (∀x)Px            P
 2   (∀y)Zy            P/(∃x)(Px∧Zx)
 3   Pa                1∀E
 4   Za                2∀E
 5   P(a/x)∧Z(a/x)     3,4∧I
 6   (∃x)(Px∧Zx)       5∃I

(Agler 333)

Agler has us suppose that for line 4 we instead chose Zb. What would happen then? When we combine lines 3 and 4, we would get Pa∧Zb. Then, if we use existential introduction on it, we would not get the conclusion we want. We instead would get (∃x)(Px∧Zb). (Agler 333)
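A small countermodel suggests why the mismatched instance Pa∧Zb does not by itself license (∃x)(Px∧Zx); the interpretations below are my own illustrative assumptions (in the actual proof, the universal premises rule such a model out).

```python
# A model where Pa∧Zb is true but nothing is both P and Z.
domain = {"a", "b"}
P = {"a"}   # only a is P
Z = {"b"}   # only b is Z

assert ("a" in P) and ("b" in Z)                   # Pa ∧ Zb holds here
assert not any(x in P and x in Z for x in domain)  # yet (∃x)(Px∧Zx) fails
```

So choosing matching constants before applying (∃I) is not just a bookkeeping nicety: without it, the existential we want is simply not entailed by the instance we built.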

8.1.3 Universal Introduction (∀I)

Universal introduction allows us to go from an expression of the form Pa to one of the form (∀x)P. There are two restrictions. The constant being substituted cannot appear as a premise or as an assumption in an open subproof, and it cannot occur in the derived universally quantified formula.

 Universal Introduction (∀I): A universally quantified proposition ‘(∀x)P’ can be derived from a possible substitution instance ‘P(a/x)’ provided (1) ‘a’ does not occur as a premise or as an assumption in an open subproof, and (2) ‘a’ does not occur in ‘(∀x)P.’

     P(a/x)
     (∀x)P     ∀I
(Agler 335)

Agler gives the simple example of a proof for:
(∀x)Px ⊢ (∀y)Py
(Agler 335)

 1   (∀x)Px     P
 2   Pa         1∀E
 3   (∀y)Py     2∀I
(Agler 335)

Given the restrictions, we need always to check for these things when using universal introduction.
(1) Does the substitution instance ‘P(a/x)’ occur as a premise or as an open assumption?
(2) Does the substitution instance ‘P(a/x)’ not occur in ‘(∀x)P?’
(Agler 335)
In this example, we should see why we are able to make the universal introduction derivation. [The reasoning seems to be that if we are able to derive Pa from (∀x)Px, then we are saying it holds for all constants. Thus we should also be able to say (∀y)Py, because it says the same thing, namely, that it holds for all constants.]
In this case, it should be somewhat obvious why it is valid to infer ‘(∀y)Py’ at line 3 from ‘Pa’ at line 2. This is because ‘(∀x)Px’ and ‘(∀y)Py’ are notational variants. That is, ‘(∀x) Px’ says that everything in the domain of discourse has the property ‘P,’ and ‘(∀y) Py’ says exactly the same thing. So, if ‘(∀x)Px’ is true, then ‘(∀y)Py’ cannot be false since ‘(∀y)Py’ and ‘(∀x)Px’ are logically equivalent.
(Agler 335)

Agler offers another illustration.
(∀x)(Px∧Rx) ⊢ (∀y)(Py)
(Agler 335)
[This one is similar to one we did above, only this time we do not compose the conjunction but rather decompose it.]

 1   (∀x)(Px∧Rx)    P
 2   Pb∧Rb          1∀E
 3   Pb             2∧E
 4   (∀y)Py         3∀I
(Agler 336)

We notice that 'b' does not occur in the premises or in an open assumption, and it is not in the final line where we perform the universal introduction. Thus we kept within the restrictions (Agler 336). Agler further explains:
Here, ‘(∀x) (Px∧Rx)’ and ‘(∀y)Py’ are not notational variants, but it should be obvious that it is impossible for ‘(∀x)(Px∧Rx)’ to be true and ‘(∀y)Py’ false since ‘(∀x)(Px∧Rx)’ says that everything is both ‘P’ and R, while ‘(∀y)Py’ says that everything is ‘P.’
(Agler 336)

Agler then provides a third example:
(∀x)(Pxc∨Qx), (∀x)¬Qx ⊢ (∀x)(Pxc)
(Agler 336)
[We will need Pxc. We see it in the first premise. Were we to use universal elimination on the first sentence, we would get a formula now with a constant in the first place of the predicate formula. If we also used universal elimination on the second premise, then we could use disjunctive syllogism to obtain the P with two constants. This would allow us to use universal introduction to get the desired conclusion.]

 1   (∀x)(Pxc∨Qx)    P
 2   (∀x)¬Qx         P
 3   Pac∨Qa          1∀E
 4   ¬Qa             2∀E
 5   Pac             3,4DS
 6   (∀x)Pxc         5∀I
(Agler 336)

As we can see, we stayed within our restrictions (336). [In the previous simpler proofs, it was immediately obvious why universal introduction was valid. For, it seemed to stay within our intuitions of what it means for a variable to be universally quantified. However, in this case, for some reason, it is less obvious. I am not exactly sure why. If we think of possible interpretations in English, we might have for the premises, "Either everyone loves Corinne or everyone runs quickly. But no one runs quickly." It would not seem to be counterintuitive to then infer, "Therefore, everyone loves Corinne."] But Agler observes it is not obvious why the universal introduction is valid in the above proof. So, he says, we might wonder, "Why is it deductively valid to infer ‘(∀x)P’ from ‘P(a/x)’?" (Agler 336).

To answer this question, Agler first will look at two cases where surely the use of universal introduction is invalid. Suppose we have the statement 'Johnny Walker is a criminal' which we will symbolize Cj. We of course cannot derive from this the statement 'everyone is a criminal' or (∀x)Cx. In the second case we begin with the statement 'someone is a criminal' or (∃x)Cx. From this we of course cannot derive 'everyone is a criminal' (Agler 336).
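These two invalid generalizations can be shown with a single countermodel; the three-object domain is my own illustrative assumption.

```python
# A model with exactly one criminal: Cj and (∃x)Cx are true, (∀x)Cx is false.
domain = {"j", "k", "l"}  # j = Johnny Walker (illustrative)
C = {"j"}                 # only Johnny Walker is a criminal

assert "j" in C                          # Cj is true
assert any(x in C for x in domain)       # so (∃x)Cx is true as well
assert not all(x in C for x in domain)   # but (∀x)Cx is false
```

Since the premises come out true and the would-be conclusion false, neither inference to (∀x)Cx can be valid.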

Agler says that these sorts of uses of universal introduction are often called "hasty generalizations".
That a single specific (or some nonspecific) object has a property does not entail that every object has that property. These two cases are what motivates the two restrictions placed on the use of (∀I). Namely, a use of (∀I) is not valid when it is used to generalize an instantiating constant that names either a single specific individual or some unknown individual. However, a use of (∀I) is valid when the instantiating constant it generalizes is an arbitrarily-selected individual, that is, an individual chosen at random from the domain of discourse.
(Agler 337)

Agler will continue through some more examples. In the first one, our domain is all living humans. We will prove the conclusion 'all men are mortal', symbolized as (∀x)(Mx→Rx). And our premises will be 'all organisms are mortal' or (∀x)(Ox→Rx) and 'all men are organisms' or (∀x)(Mx→Ox). [It seems we cannot just use universal elimination on all of them, giving them all the same constant, then use hypothetical syllogism to get Ma→Ra. And finally use universal introduction to get the conclusion we want. There must be a reason why we cannot do that, but I am missing it at the moment. Instead, Agler will do this proof by making the assumption Ma, then we derive within that subproof Ra. We make that eventual derivation by using universal elimination on the premises and then affirming their antecedents to finally get Ra. Then by using conditional introduction, we get Ma→Ra, from which we derive (∀x)(Mx→Rx) using universal introduction.]

 1   (∀x)(Ox→Rx)    P
 2   (∀x)(Mx→Ox)    P
 3   | Ma           A
 4   | Oa→Ra        1∀E
 5   | Ma→Oa        2∀E
 6   | Oa           3,5→E
 7   | Ra           4,6→E
 8   Ma→Ra          3–7→I
 9   (∀x)(Mx→Rx)    8∀I
(Agler 338)

Agler then explains why this conclusion does not violate our restrictions and is also not a hasty generalization.
Note two things. First, line 9 does not violate either of the two restrictions on the use of (∀I), and it is not a hasty generalization because it generalizes not from some specific man and not from some unknown man but from a randomly selected man. We have shown then that if we randomly select a man from the universe of discourse and show that the man has the property of being mortal, we can generalize that every man has that property. This is valid because the generalization depends not on any idiosyncratic property that belongs to a specific man but on a property of all men.
(Agler 338)

For the next example, we will want to prove that no positive, even integer greater than 2 is prime. We know this already. 2 is a prime number. Every even number greater than 2 is divisible by 2. A prime number by definition is one that has no other divisor other than itself and 1. But even numbers greater than 2 are divisible by 1, by themselves, and by 2 (among potentially other numbers). So we know already that this conclusion is true. To prove this, we will first formulate the conclusion as 'no integer that is positive and even and greater than 2 is prime.' [We will make 'I' mean 'is an integer', 'Q' mean 'is positive', 'E' mean 'is even', 'G' mean 'is greater than 2' and 'P' mean 'is prime'.] We would formulate this conclusion symbolically as:
(∀x){[(Ix∧Qx)∧(Ex∧Gx)]→¬Px}
(Agler 338)
[So this would be something like, for all x, if x is a positive integer and x is even and greater than 2, then it is not prime. We will do our proof by assuming the antecedent with the intention of deriving the consequent. But the antecedent will not be universally quantified. Rather, we will select a constant.]

 1   | (Ia∧Qa)∧(Ea∧Ga)    A
(Agler 338)

[I am not following the next part. Let me quote it first.]
This says to assume that some randomly selected integer ‘a’ from a universe of discourse consisting of positive integers is an even integer greater than 2. From this assumption, the definition of a prime number, and the fact that every even number greater than 2 is divisible by 2, we can infer that this randomly selected integer is not prime.
(Agler 338)

 1   | (Ia∧Qa)∧(Ea∧Ga)    A
 .   | .
 .   | .
 .   | .
 k   | ¬Pa
(Agler 338-339)

[It seems like Agler is saying that through a series of steps, which is guided by the definition of prime number, we can derive ¬Pa. In order to do that, I would think that somehow we would need already to have established that (∀x){[(Ix∧Qx)∧(Ex∧Gx)]→¬Px} or something that somehow functions to say that ¬Pa can be derived somehow from this formula or its parts. I am not sure why Agler leaves out the steps. Perhaps it is too obvious and/or also too lengthy. Or perhaps we are supposed to somehow derive ¬Pa from the definition somehow. But in that case, I do not understand the omitted lines. Or maybe the idea is that we do some sort of indefinite series of checks for each constant to determine semantically the truth value, and maybe furthermore this is some kind of inductive proof. No, probably not. At any rate, after having established this derivation, we can then use conditional introduction to obtain the conclusion structure but with all constants 'a' and then use universal introduction to get the goal proposition.]

 1     | (Ia∧Qa)∧(Ea∧Ga)               A
 .     | .
 .     | .
 .     | .
 k     | ¬Pa
 k+1   [(Ia∧Qa)∧(Ea∧Ga)]→¬Pa           1–k,→I
 k+2   (∀x){[(Ix∧Qx)∧(Ex∧Gx)]→¬Px}     k+1,∀I
(Agler 339)

[Agler then shows this general pattern now with a proof for the Pythagorean theorem. I again do not understand how things work in the omitted parts. But the idea seems to be the following. What we say about some randomly selected constant we can also say about all constants. I am probably missing the idea here, so let me quote.]
As one final example, consider the general structure of Euclid’s proof of the Pythagorean theorem involving a randomly selected right triangle. This is proposition I.47 in Euclid’s Elements. The conclusion of the proof is the following: For every right triangle, the square of the hypotenuse is equal to the sum of the squares of the legs. That is, c² = a² + b².

We can prove this by beginning with an assumption. That is, assume some randomly selected Euclidean triangle has an angle equal to 90 degrees. That is, assume ΔABC where ∠ACB is 90°. From this assumption, various assumptions about space and lines, and more general facts about triangles in general, we derive the consequence that the randomly selected triangle is such that the square of the hypotenuse is equal to the square of its two sides. After deriving this consequence, a use of ‘→Ι’ is applied in order to exit the subproof, and then the conditional is generalized.
(Agler 340)

 1     | ΔABC where ∠ACB is 90°                       A
 .     | .
 .     | .
 .     | .
 k     | c² = a² + b²
 k+1   ΔABC where ∠ACB is 90° → c² = a² + b²          1–k,→I
 k+2   For all right-angled triangles, c² = a² + b²   k+1,∀I
(Agler 340)

The proof of the Pythagorean theorem thus involves an inference from an arbitrarily selected individual constant to a universal statement about all right triangles.
(Agler 340)

Agler then has us consider the following incorrect uses of the rule.

 1   Fbc∧Fbd       P
 2   Fbc           1∧E
 3   Fbd           1∧E
 4   (∀x)(Fxc)     2∀I—NO!
 5   (∀x)(Fbx)     2∀I—NO!
 6   (∀x)(Fxd)     3∀I—NO!
 7   (∀x)(Fbx)     3∀I—NO!
(Agler 340)

[Recall that the first restriction of universal introduction is that the constant being substituted not appear as a premise or as an assumption in an open subproof. In line 4, 'b' is substituted, but it appears in the premise. In line 5 'c' is substituted, but it also appears in line 1. Lines 6 and 7 similarly substitute for 'b' and 'd' respectively, which appear in the premise.] Lines 4 through 7 all break the restriction that we cannot substitute for a constant appearing in the premises or in an open subproof (Agler 340).

Agler then shows a violation of this restriction in terms of an assumption in an open subproof.

 1   Fbb           P
 2   | Raa         A
 3   | (∀x)Rxx     2∀I—NO!
(Agler 340)

Here, the assumption is still open, as we have not discharged it yet. So we cannot substitute for 'a', which appears in that open assumption. But, if we do discharge the assumption, we can use universal introduction and substitute for the constant that appears in that discharged assumption. Agler shows that in this example.

 1   Fbb                     P
 2   | Raa                   A
 3   | Raa∨Pa                2∨I
 4   Raa→(Raa∨Pa)            2–3→I
 5   (∀x)[Rxx→(Rxx∨Px)]      4∀I
(Agler 341)

[Recall that the second restriction was that the constant being substituted cannot appear in the final universally quantified formula.] Agler now gives some examples that break the second restriction on universal introduction and then in the final line shows an instance that does not break the second restriction.

 1   (∀x)(Pxx→Lb)     P
 2   Pcc→Lb           1∀E
 3   (∀y)(Pyc→Lb)     2∀I—NO!
 4   (∀y)(Pcy→Lb)     2∀I—NO!
 5   (∀y)(Pyy→Lb)     2∀I
(Agler 341)

Agler then gives us his strategic rule SQ#3(∀I)
SQ#3(∀I): When the goal proposition is a universally quantified proposition ‘(∀x)P,’ derive a substitution instance ‘P(a/x)’ such that a use of (∀I) will result in the desired conclusion.
(Agler 341)

Agler illustrates with a proof whose goal proposition is (∀x)(Rxx→Rxx).

 1   | Rcc             A
 2   | Rcc             1R
 3   Rcc→Rcc           1–2→I
 4   (∀x)(Rxx→Rxx)     3∀I
(Agler 341)

This example shows us that in order to get the goal proposition, which has the form (∀x)P, we needed first to obtain one of the form P(a/x). We can see then in the next example how deriving a form with the universal quantifier already in it will not help us arrive at our goal proposition.

 1   | (∀x)Rxx                A
 2   | (∀x)Rxx                1R
 3   (∀x)Rxx→(∀x)Rxx          1–2→I
(Agler 341)

[Recall the strategic rule for existential introduction: "SQ#2(∃I): When using (∃I), aim at deriving a substitution instance ‘P(a/x)’ such that a use of (∃I) will result in the desired conclusion".] Just like with the strategic rule for existential introduction, we for this rule also think backwards from the conclusion (Agler 342). He has us consider this proof.
⊢ (∀x)[Rxx→(Sxx∨Rxx)]
(Agler 342)

[By working backward, we know that we will want to derive Raa→(Saa∨Raa) so that from it we can derive (∀x)[Rxx→(Sxx∨Rxx)] by means of universal introduction. As we have no premises, we will need to do this by means of assumptions in subproofs. Given it is a conditional, if we assume the antecedent and can derive the consequent, we can then derive this formula in the main proof. Suppose we assume Raa. How might we derive from it Saa∨Raa? We see that Raa, which we assumed is also in the disjunction. So by using disjunction introduction, we can disjoin to Raa any arbitrary formula. So we will disjoin to it Saa, which will allow us to eventually derive the goal proposition, as seen in these steps below.]

 1   | Raa                    A/Saa∨Raa
 2   | Saa∨Raa                1∨I
 3   Raa→(Saa∨Raa)            1–2→I
 4   (∀x)[Rxx→(Sxx∨Rxx)]      3∀I
(Agler 342)

8.1.4 Existential Elimination (∃E)

[For existential elimination, we begin with an existentially quantified expression. We then assume that predicate with some constant which has not appeared in the premises or in an active proof or subproof. We then derive some other proposition that does not contain this constant, and we derive that new proposition in the main proof.]

 Existential Elimination (∃E): From an existentially quantified expression ‘(∃x)P,’ an expression ‘Q’ can be derived from the derivation of an assumed substitution instance ‘P(a/x)’ of ‘(∃x)P,’ provided (1) the individuating constant ‘a’ does not occur in any premise or in an active proof (or subproof) prior to its arbitrary introduction in the assumption ‘P(a/x),’ and (2) the individuating constant ‘a’ does not occur in proposition ‘Q’ discharged from the subproof.

     (∃x)P
     | P(a/x)
     | .
     | .
     | Q
     Q          ∃E
(Agler 342)

Agler provides this simple example.

 1   (∃x)Px       P
 2   | Pa         A/∃E
 3   | (∃y)Py     2∃I
 4   (∃y)Py       1,2–3∃E
(Agler 343)

Recall that existential elimination has two restrictions. We might ask:
(1) Does the individuating constant ‘a’ occur in any premise or in an active proof (or subproof) prior to its arbitrary introduction in the assumption ‘P(a/x)’?
(2) Does the individuating constant ‘a’ occur in proposition ‘Q’ discharged from the subproof?
(Agler 343)
As we can see, the above example does not violate either restriction, so the inference is valid.

Agler gives another example.
(∃x)Px ⊢ (∃x)(Px∨¬Mx)
(Agler 343)

 1   (∃x)Px             P
 2   | Pa               A/∃E
 3   | Pa∨¬Ma           2∨I
 4   | (∃x)(Px∨¬Mx)     3∃I
 5   (∃x)(Px∨¬Mx)       1,2–4∃E
(Agler 343)

This example above shows a valid use of the rule.

Agler now has us consider this one. [(∃z)(Wz∧Mz)⊢(∃z)Wz∧(∃z)Mz. As we can see, first we will use existential elimination in an assumption to get the conjunction with constants. Then we decompose it into its constituent parts. From each of these we use existential introduction to derive existentially quantified formulas with variables z. We then combine them with conjunction introduction, and derive the goal proposition in the main proof using existential elimination.]

 1   (∃z)(Wz∧Mz)          P
 2   | Wa∧Ma              A/∃E
 3   | Wa                 2∧E
 4   | Ma                 2∧E
 5   | (∃z)Wz             3∃I
 6   | (∃z)Mz             4∃I
 7   | (∃z)Wz∧(∃z)Mz      5,6∧I
 8   (∃z)Wz∧(∃z)Mz        1,2–7∃E
(Agler 343-344)

Treating ‘Wx’ = x is white and ‘Mx’ = x is a moose, line 1 reads, Something is a white moose. From this proposition, Something is white, and something is a moose is inferred.
(Agler 344)
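The white-moose inference can also be checked semantically; the two-object domain and the interpretations of W and M below are my own illustrative assumptions.

```python
# A model where something is a white moose.
domain = {"m1", "m2"}
W = {"m1"}            # white things
M = {"m1", "m2"}      # moose

# (∃z)(Wz∧Mz) is true because m1 witnesses both predicates:
assert any(z in W and z in M for z in domain)
# Hence (∃z)Wz∧(∃z)Mz follows:
assert any(z in W for z in domain) and any(z in M for z in domain)
```

Any object that witnesses the conjunction also witnesses each conjunct, which is why the proof's detour through the subproof assumption Wa∧Ma is truth-preserving.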

Agler will now examine some incorrect uses of the existential elimination that are invalid because they violate one of the restrictions. The following example will break the first restriction.

 1   Ea→(∀x)Px     P
 2   (∃x)Ex        P
 3   | Ea          A
 4   | (∀x)(Px)    1,3→E
 5   (∀x)Px        2,3–4∃E—NO!
(Agler 344)

Note that 'a' occurs in the first premise. There is nothing wrong with us assuming it in line 3. But there is something wrong in using existential elimination to extract (∀x)Px from the subproof, because (∀x)Px was made by substituting for the 'a' which occurred in the premise (Agler 344). Agler illustrates why this is invalid by using English sentences.

 1   If 3 is even, then every number is prime.   P
 2   Some number is even.                        P
 3   .
 4   .
 5   Therefore, every number is prime.           2,3–4∃E—NO!
(Agler 344-345)
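A countermodel makes the invalidity concrete: we can build a model in which both premises are true and the conclusion false. The two-object domain and interpretations are my own illustrative assumptions.

```python
# Read 'a' as the number 3: it is not even, and nothing here is prime.
domain = {"a", "b"}
E = {"b"}    # only b is even; a (i.e., 3) is not
P = set()    # nothing is prime in this model

premise1 = ("a" not in E) or all(x in P for x in domain)  # Ea→(∀x)Px: true, Ea is false
premise2 = any(x in E for x in domain)                    # (∃x)Ex: true, b witnesses it
conclusion = all(x in P for x in domain)                  # (∀x)Px: false

assert premise1 and premise2 and not conclusion
```

Since the premises can be true while the conclusion is false, no correct rule may license this step, which is exactly what restriction (1) on (∃E) prevents.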

Agler now gives an example that does not violate this restriction.

 1   (∃x)(Pxx∧Qxx)    P
 2   | Paa∧Qaa        A/∃E
 3   | Qaa            2∧E
 4   | (∃x)Qxx        3∃I
 5   (∃x)Qxx          1,2–4∃E
(Agler 345)

But in the following case, restriction 1 is violated.

 1   (∀x)(∃y)Txy    P
 2   (∃y)Tay        1∀E
 3   | Taa          A/∃E
 4   | (∃x)Txx      3∃I
 5   (∃x)Txx        2,3–4∃E—NO!
(Agler 345)

Again, the problem here is that the constant 'a' assumed in the subproof already appeared in line 2 of the main proof, prior to its introduction in the assumption.
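Here too a countermodel shows the step is genuinely invalid; the two-object domain and the relation T are my own illustrative assumptions.

```python
# Everyone bears T to someone, yet nothing bears T to itself.
domain = {1, 2}
T = {(1, 2), (2, 1)}

# (∀x)(∃y)Txy is true: each x has some y with Txy.
assert all(any((x, y) in T for y in domain) for x in domain)
# But (∃x)Txx is false: no object is T-related to itself.
assert not any((x, x) in T for x in domain)
```

So the premise can be true while the would-be conclusion is false, which is why restriction (1) must block the assumption Taa after 'a' has already appeared in line 2.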

With this in mind, Agler formulates strategic rule SQ#4(∃E):
SQ#4(∃E) Generally, when deciding upon a substitution instance ‘P(a/x)’ to assume for a use of (∃E), choose one that is foreign to the proof.
(Agler 345)
[The second restriction, recall, is that the constant being substituted for must not be found in the derived formula itself.] Agler will now give an example of a proof that violates the second restriction.

 1   (∃z)(Wzz∧Mz)    P
 2   | Wbb∧Mb        A/∃E
 3   | Wbb           2∧E
 4   | (∃x)(Wbx)     3∃I
 5   (∃x)(Wbx)       1,2–4∃E—NO!
(Agler 345-346)

In line 4, we substituted x for the second constant 'b' in 'Wbb', leaving the first 'b' intact. This on its own is fine, but then we cannot make the derivation in line 5 using existential elimination, because we substituted x for 'b', but 'b' is found in the derived formula too (Agler 346). Agler further explains why this is invalid: "consider that line 1 only refers to there being at least one object z that is both ‘Wzz’ and ‘Mz.’ While we could validly infer ‘(∃x)(Wzz),’ we could never validly infer a specific object constant" (Agler 346). [I did not follow that explanation so well. Perhaps the idea is that we know that some constant holds for W, but we do not know which, so we cannot infer line 5.]
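This too can be made concrete with a countermodel; the domain and interpretations below are my own illustrative assumptions.

```python
# The premise (∃z)(Wzz∧Mz) is true (c witnesses it), but (∃x)Wbx is false,
# because the witness need not be the object named 'b'.
domain = {"b", "c"}
W = {("c", "c")}   # only c bears W to itself
M = {"c"}          # only c is M

assert any((z, z) in W and z in M for z in domain)  # (∃z)(Wzz∧Mz)
assert not any(("b", x) in W for x in domain)       # (∃x)Wbx fails
```

The existential premise guarantees a witness but does not tell us which object it is, so a conclusion that still names the specific constant 'b' outruns what the premise supports.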

Note also that while the assumption in existential elimination is based on an existentially quantified formula, the formula that we eventually derive can even be a universally quantified one, as in this example.

 1   (∀x)Px       P
 2   (∃x)Zx       P
 3   | Za         A/∃E
 4   | (∀x)Px     1R
 5   (∀x)Px       2,3–4∃E
(Agler 346)

Agler then has us consider one final proof. [(∃y)Py→(∀y)Ry, (∃x)Px ⊢ (∀y)Ry. If we could obtain (∃y)Py, then we could derive (∀y)Ry. But there seems to be no way to directly convert (∃x)Px to (∃y)Py. However, if we assume a substitution instance of (∃x)Px, then we can use existential introduction to get (∃y)Py. This will allow us to use conditional elimination to get the goal proposition, which we would then place into the main proof.]

 1   (∃y)Py→(∀y)Ry    P
 2   (∃x)Px           P/(∀y)Ry
 3   | Pa             A/∃E
 4   | (∃y)Py         3∃I
 5   | (∀y)Ry         1,4→E
 6   (∀y)Ry           2,3–5∃E
(Agler 346)

Agler, David. Symbolic Logic: Syntax, Semantics, and Proof. New York: Rowman & Littlefield, 2013.

Some changes to the book quotations may have been made, as based on: http://markdfisher.com/wp-content/uploads/2014/02/PHIL_012_ONLINE_SYLLABUS_SP14-3-1.pdf .
