[The following is a summary. Boldface (except for metavariables) and bracketed commentary are my own. Please forgive my typos, as proofreading is incomplete. I highly recommend Agler’s excellent book. It is one of the best introductions to logic I have come across.]
David W. Agler
Symbolic Logic: Syntax, Semantics, and Proof
Ch.5: Propositional Logic Derivations
5.3 Intelim Derivation Rules
In a proof we can use derivation rules to derive the conclusion through some number of steps. One set of such rules are introduction and elimination rules, called intelim (introduction-elimination) derivation rules. In some cases we will need to make subproofs that begin with assumed propositions.
Once we have arrived at a derivation that a rule can use to make a derivation in an outer proof, we discharge the assumption (rendering it inoperable), thereby closing the subproof. There can be subproofs within other subproofs. We can use propositions within one layer of a proof, so long as it is one unbroken subproof. And we can import into a subproof any propositions from outer layers to its left.
However, we cannot export propositions from a subproof to another separate subproof at the same level, nor can we export a proposition from a subproof into layers to the left of it.
However, there are derivation rules that will allow us to make a derivation in a level of a proof on the basis of what was derived in a subproof one level to the right of it. The intelim derivation rules are summarized in the following chart:
5.3 Intelim Derivation Rules
We look now at the derivation rules for our system of propositional derivations (PD). Agler writes that “for every propositional operator (¬, ∧, ∨, →, ↔), there are two derivation rules: an introduction rule and an elimination rule” (163). For that reason, PD is called an intelim system [from introduction-elimination]. Agler writes:
An introduction rule for a particular operator is a derivation rule that introduces a proposition with that operator into the proof.
An elimination rule for a particular operator is a derivation rule that crucially begins with (or uses) a proposition with that operator in the proof.
5.3.1 Conjunction Introduction (∧I)
Suppose we have two propositions that already occur in the proof. We can use the conjunction introduction (∧I) derivation rule to combine those two propositions.
Conjunction introduction states that from ‘P’ and ‘Q,’ the complex conjunction ‘P∧Q’ or ‘Q∧P’ can be validly inferred.
To illustrate, we consider the following example argument.
W, Q, R ⊢ W∧R
We first place the premises each with their own line, and for the final one, we indicate the goal proposition, W∧R.
Since we already have given W and R, we can use the conjunction introduction rule to combine them in order to obtain the conclusion. So we apply ∧I to lines 1 and 3.
And as we can see, we write the justification as 1,3∧I.
Recall the formulation: “Conjunction introduction states that from ‘P’ and ‘Q,’ the complex conjunction ‘P∧Q’ or ‘Q∧P’ can be validly inferred” (163). As we can see, P and Q are metavariables. This means that either or both can be complex formulas. For example, consider this argument.
A→B, D ⊢ (A→B)∧D
First we set up the premises and goal proposition.
And as we can see, we already have the component parts for the goal conjunction. We just need to use the ∧I rule to combine them.
Agler then has us consider a more complicated argument:
(R↔S), W∧¬T, Z ⊢ [(R↔S)∧(W∧¬T)]∧Z
Let us first place the premises.
As we can see, all the component parts of the goal proposition are found individually as lines among the premises. But we cannot add them all together in one operation. We need to add them in the order of their nesting. We see that in the goal proposition [(R↔S)∧(W∧¬T)]∧Z the main conjunction operator unites two parts, namely, [(R↔S)∧(W∧¬T)] and Z. So since [(R↔S)∧(W∧¬T)] has nested components, let us begin by constructing it. We see the parts in lines 1 and 2.
Next, we combine that new conjunction with Z from line 3.
And this produces our goal proposition.
Agler then shows how conjunction introduction is found in everyday reasoning. For example, we might conclude that John is angry, and Liz is angry on the basis of knowing individually that John is angry and that Liz is angry.
[The next idea seems to be the following. When using our derivation rules, we are not necessarily concerned with the truth and falsity of the terms. We merely are saying that certain structures can be rendered into certain other structures. This is simply syntactic entailment. However, we might also be concerned with the validity of the inferences. So we might want to know that the derivations create formulations that cannot be false when the original formulas are true. This is something we can confirm using truth tables and truth trees.]
It is important to note that conjunction introduction, as a derivation form, is a statement about syntactic entailment. That is, the derivation rule is formulated independently of the truth or falsity of the premises. However, while the rules are formulated purely in terms of their structure, the rules we include in PD are guided by semantic concerns. That is, we choose rules that are deductively valid. Given this consideration, we can check whether or not we should include a given derivation rule into PD using a truth table or truth tree.
In the case of ‘∧I’, consider the following truth table:
The above table shows that for ‘P, Q ⊢ P∧Q,’ in no truth-value assignment is it possible for the premises ‘P’ and ‘Q’ to be true and the conclusion ‘P∧Q’ false. Alternatively, consider a truth tree for the same argument:
The above tree shows that the premises and the negation of the conclusion produce a closed tree, and the argument is therefore valid.
[Note, in my version of the text, the justification for the truth tree is given as 3→D.] [Also, for the tree test above, recall from section 4.6.5 that we can test for validity by making a tree where we keep the premises and make the conclusion false. If this produces a closed tree, that means there are no assignments which make it such that the premises are true and the conclusion false, and thus it is valid.]
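[The truth-table check described above can also be carried out mechanically. The following sketch is my own illustration, not from Agler’s text; the helper name `valid` is invented. It brute-forces every truth-value assignment and confirms that ‘P, Q ⊢ P∧Q’ has no counterexample:

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """An argument is valid iff no truth-value assignment makes
    every premise true while the conclusion is false."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # counterexample: premises true, conclusion false
    return True

# Semantic counterpart of conjunction introduction: P, Q |- P AND Q
print(valid([lambda v: v["P"], lambda v: v["Q"]],
            lambda v: v["P"] and v["Q"],
            ["P", "Q"]))  # True
```

The same helper would report an invalid argument, such as P ⊢ Q, by finding the assignment where P is true and Q is false.]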
5.3.2 Conjunction Elimination (∧E)
With conjunction elimination, we can start with a conjunction and derive from it either conjunct.
Conjunction elimination (∧E) is a derivation rule that allows for deriving either conjunct of a conjunction. In other words, it is a derivation rule that eliminates a conjunction by inferring either of the conjuncts.
We can only use conjunction elimination on formulas whose main operator is a conjunction. Thus it will not work on (P∧Q)∨R. We will now examine the use of this rule in the case of:
First we will place it as a proposition.
We then use ∧E to extract both conjuncts.
Finally we apply it again for the left conjunct that ended up on line 2.
And again, ∧E is formulated using metalinguistic variables, which means that it can apply to complex formulas, as we can see for the proof of:
(A→B)∧(C∧D) ⊢ D
So let us place the premises on the first line with the goal proposition noted in the justification column.
We first use conjunction elimination to draw out the right conjunct.
Then we use it again to draw out the right conjunct from that derived proposition.
And this gives us our conclusion.
Agler then notes how conjunction elimination is an operation we use in everyday reasoning. He gives this illustration:
John is in the park, and Mary is in the subway. Therefore, John is in the park.
Agler then notes that there are semantic reasons that we include conjunction elimination in PD. [I am not sure I follow the demonstration so well. It seems that he will make a tree showing that the following argument is valid: P∧Q ⊢ P. And he will do so by assuming the negation of the conclusion and by using a truth tree to test to see if the tree is closed. That would mean the inference is valid, and thus for semantic reasons this syntactical rule is affirmed.]
In addition, the choice to include conjunction elimination in PD is guided by semantic concerns. Thus, a truth table or truth tree is capable of showing that ‘∧E’ is deductively valid.
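[To make the semantic point concrete, here is a brief sketch of my own (not from Agler; the variable name is invented) that enumerates all four assignments for ‘P∧Q ⊢ P’ and confirms there is no counterexample:

```python
from itertools import product

# Check P AND Q |- P by enumerating every truth-value assignment.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if (p and q) and not p  # premise true but conclusion false?
]
print(counterexamples)  # [] -- none exist, so the inference is valid
```
]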
Agler also notes that in our reasoning we often use more than one derivation rule. He illustrates with the proof for the following.
R∧B, D ⊢ (B∧D)∧R
So let us first set the premises, with the goal proposition indicated.
We see that we need to take B from line 1 and conjoin it with D from line 2. Then we need to conjoin that conjunction with R from line 1. So first we need to decompose the conjunction from line 1 so that we have all the parts we need. Then we need to combine them to obtain the conclusion. So let us first use conjunction elimination to get the two conjuncts from line 1.
To get the left conjunct of the conclusion, we will conjoin B and D using conjunction introduction.
Then we will conjoin that conjunction with R, to produce our goal proposition.
In the above proof, it was necessary to make use of conjunction elimination to break the complex propositions down into their component parts and to use conjunction introduction on key atomic propositions to build up the desired complex proposition.
5.3.3 Assumptions (A) and Subproofs
We turn now to the notion of an assumption (A). We might assume a proposition as true for the purpose of making the proof. Agler first shows us how it can work in an everyday argument. In this case, we assume God’s existence, then find an absurdity, which leads us in the end to conclude God does not exist.
Consider, for example, a case where you and a friend disagree about whether or not God exists. You think that God does not exist, and your friend thinks that God does exist. In order to persuade your friend, you might reason as follows:
Let’s assume that God does exist. If God is as great as you say he is, then there should be no poverty or suffering in the world. But there is suffering in the world. This is inconsistent! Therefore, God does not exist.
The above argument does not start by simply asserting, God exists. It instead begins by assuming that God exists and then, given this proposition and others, involves a line of reasoning on the basis of this assumption.
Agler says that we can view the above argument as involving two separate but integrated proofs, namely, a main proof, and a subproof whose conclusion is used in the main one.
There is the main proof, which aims at the conclusion God does not exist, and there is a subproof, which aims to show that anyone who assumes (or believes) that God does exist is forced into believing something that cannot be the case, namely, something inconsistent (i.e., ‘There is both suffering and no suffering in the world’).
[Note, in my version of the text, there is no number for the fifth line “There is suffering”. I added it here to make the diagram resemble others that follow, but perhaps it is not supposed to be there.]
We stick with this debate, now turning to our friend’s argument. She says:
Let’s assume that God does not exist. If God does not exist, then the universe just magically came into existence out of nothing. But the universe did not come into existence from nothing. Therefore, you are contradicting yourself in saying that God does not exist and the world exists. Therefore, God does exist.
We indicate assumptions in the justification column by writing ‘A’. We also indent the subproof lines and draw a descending line from where the subproof starts to where it ends. Agler gives an example.
While working within a subproof, we can use derivation rules. We can also nest another assumption and subproof within the first subproof. This could look something like:
In the example above, the proof starts with ‘Q’ in the main line of the proof. Next, at line 2, ‘S’ is assumed, which begins the first subproof (call this subproof1). Note that subproof1 is not simply an independent proof but is part of a proof that is built on the main line of the proof. Building one part of the proof on another in this way is called nesting. In the above example, the main line of the proof nests subproof1 (or, alternatively, subproof1 is in the nest of the main line) since subproof1 is built on the main line. In addition, subproof1 is more deeply nested than the main line since the main line nests (or contains) subproof1.
Next, at line 3, ‘W’ is assumed, which begins a second subproof (call this subproof2). Again, notice that the assumption that begins subproof2 is not independent of subproof1. Instead, subproof2 is built upon subproof1 and the main line of the proof. This means that the main line nests subproof1, and subproof1 nests subproof2. In addition, subproof2 is more deeply nested than both subproof1 and the main line of the proof.
Here is a graphical representation:
Agler notes that the assumptions are built upon propositions already given in the proof. He has us think of the nested subproof structure as it might occur in a conversation:
We agree that ‘Q’ is true.
Given that ‘Q’ is the case, assume ‘S.’
Given that ‘Q’ is true and our assumption ‘S,’ assume ‘W.’
However, often when making an assumption in a proof, instead of making another assumption that depends on the first, it is better to make a different assumption that does not depend on the first. Agler has us think of the following conversation taking place.
We agree that ‘A’ is true.
Given the truth of ‘A,’ assume ‘B.’
Given the truth of ‘A,’ assume ‘C.’
As we can see in the proof structure below,
there are two subproofs, but neither subproof nests the other. That is, the subproof beginning with ‘C’ at line 4 is not built upon the subproof beginning with ‘B’ at line 2. It is a separate subproof.
Agler then addresses a very important point. If a proposition is outside a subproof (to its left), then we can use that outside proposition within the subproof. But whatever propositions we use inside the subproof cannot be used (to the left) outside it.
Finally, another important feature of reasoning within subproofs is that propositions outside the subproof can be used inside the subproof, but propositions inside the subproof cannot be used outside the subproof. In the language of nests, a proposition less nested can be used in a more deeply nested part, but a proposition more deeply nested cannot be used in a less deeply nested part.
It is perhaps easiest to see this graphically:
So we will look at some examples. The first one will break the rule of using propositions across breaks or gaps between subproofs.
Here there are two separate subproofs, one in lines 2-3, and the other in lines 4-5. In line 5, we erroneously use a proposition from the first subproof along with a proposition in the second subproof. The next example will break the rule that says we cannot take a proposition in a subproof and use it to the left in the higher-order proof.
As we can see, we used the subproof propositions B∧C and B in the main proof lines 4 and 5. Agler writes that this is like saying, “Assume that I am the greatest person in the world; therefore, I am the greatest person in the world” (173).
But, although we cannot use a proposition from inside a subproof outside that subproof, we can use a proposition from outside the subproof inside it. Agler offers this example,
Agler ends by noting that on account of these rules, it would seem pointless to make subproofs. For, whatever we accomplish within them is useless in the higher-order proofs outside them. However, there are in fact other rules that will allow us to infer propositions outside a subproof based on what is established inside them. We turn now to the first such rule, conditional introduction.
5.3.4 Conditional Introduction (→I)
Conditional introduction (→I) will allow us to derive a conditional outside a subproof on the basis of an assumption within the subproof. “The derived conditional consists of the assumed proposition as the antecedent and the derived conclusion in the subproof as the consequent” (174).
Agler illustrates with this example argument:
Q ⊢ P→Q
Let us first place the premise on the first line and the conclusion in the justification column.
[Note, in my version of the text, there is just a Q for the goal proposition.] [So we need to introduce an atomic proposition P that is not among the premises and make it the antecedent to Q as the consequent. Conditional introduction will allow us to do the following. We make an assumption in a subproof. Then we use derivation rules to obtain another formulation in that subproof. Then we draw out from that subproof a proposition with the assumption as the antecedent and the final derived proposition as the consequent. This means that we will want in our subproof to assume P, since we will want it to be the antecedent, and we will want to derive Q, as we will want it to be the consequent of our resulting conditional. So let us first assume P.]
[At this point I am confused. Even in the next example something like the following will happen. We said already that we can use any propositions in the main proof in the subproof. So at this point, why not just place Q next? Instead, we will make P∧Q using the conjunction introduction rule. (Later we learn the rule that allows a proposition to be repeated, so until then we perhaps need this more complicated method.)]
[Then we will use conjunction elimination to create Q on its own.]
[We now have derived Q in our subproof which began by assuming P, and that means we can extract P→Q in our main proof, which was the goal proposition. In the justification we note that we obtained this from all the lines of the subproof.]
Agler has us “Notice that conditional introduction begins by assuming ‘P’ (the antecedent of the conclusion); then ‘Q’ (the consequent of the conclusion) is derived in the subproof; finally, a conditional ‘P→Q’ is derived outside the subproof” (174). After we are done with our work within a subproof, we consider the subproof’s assumption as no longer in effect, and we say that the assumption has been discharged. Before we have arrived at the point where we discharge the assumption, the subproof is considered an open subproof; after the assumption is discharged, it is a closed subproof.
When a subproof is completed, we say that the assumption of the subproof has been discharged. That is, the force or positive charge of the assumption, as well as the corresponding subproof in which it occurs, is no longer in effect. In addition, we call a subproof with an undischarged assumption an open subproof and a subproof with a discharged assumption a closed subproof.
Agler will now show another proof using conditional introduction, but this time he will walk through it step by step. The argument is:
S→D ⊢ S→(S→D)
First we set it as a premise, and we note the goal proposition in the justification column.
Agler then writes:
Next, notice that our goal conclusion is a conditional: ‘S→(S→D).’ We might be able to derive this conditional by using ‘→I,’ where we assume the antecedent of the conclusion in a subproof and derive the consequent of the conclusion in that same subproof. Thus, start by assuming ‘S’ and making ‘S→D’ our goal for the subproof:
We next need to derive S→D in our subproof. [As we saw in the prior example, we can do that by using conjunction introduction to take it from the first line and combine it with the assumed S.]
[Then, we derive S→D from line 3 by using conjunction elimination.] [Note, in my version of the text, the justification column for line 2, in this and in the next diagram, reads A→D.]
As we see from the justification of line 2, we have arrived upon our goal proposition, S→D. This means “we can discharge the assumption (close the subproof) by using ‘→I’:”
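[We can also confirm semantically, by brute force, that S→D ⊢ S→(S→D) is valid. The sketch below is my own illustration, not from the text; the helper name `implies` is invented:

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b is false only when a is true and b false."""
    return (not a) or b

# Check S -> D |- S -> (S -> D): on every row where the premise is true,
# the conclusion must be true as well.
ok = all(
    implies(s, implies(s, d))      # conclusion
    for s, d in product([True, False], repeat=2)
    if implies(s, d)               # keep only rows where the premise holds
)
print(ok)  # True
```
]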
[The next point seems to be the following. Suppose we have a subproof within a subproof within a main proof. And suppose further that in the most nested, that is, the rightmost subproof, we derive a proposition from an assumption. The question is, can we use conditional introduction to derive a proposition in all proofs outside that subproof, or just in the one most immediately to its left? The answer seems to be that we can only derive it in the next outer layer of the proof.]
When working with multiple subproofs, it is important to realize that the use of conditional introduction only allows for deriving a conditional out of the subproof containing the assumption and the derived proposition. In short, you can only use conditional introduction to derive a conditional out of one subproof. Consider the following proofs:
[In the first one, we will properly use the conditional introduction rule. So we begin with premise P.
We then assume R.
Now, under this subproof we start another nested subproof, this time assuming Z.
We then use conjunction introduction to make Z∧R.
As before, we then use conjunction elimination to isolate one conjunct, in this case, R.
By doing so, we have created the conditions for using conditional introduction to obtain Z→R. We will place it in the subproof nested immediately outside this one.]
Here “‘Z→R’ is properly derived out of the subproof where ‘Z’ is the assumption” (176). But what if in the last step we derived Z→R not in the nearest outside subproof but instead jumped over a subproof level, placing Z→R in the main proof? Agler says this is not permitted.
Here, “‘Z→R’ is improperly derived not only out of the subproof where ‘Z’ is the assumption but also the subproof involving ‘R’” (Agler 176).
Agler then gives a final example for conditional introduction:
A ⊢ B→[B∧(C→A)]
[So first let us set A as the premise, and note the goal proposition in the justification column.
At this point, I am not sure why we assume B next. I notice that we have a conjunction that we need to derive: B∧(C→A). But not all of what we will conjoin is given in the main proof. Moreover, we do not have a rule for extracting a conjunction out of a subproof. We can only extract conditionals at the moment. This means, I suppose, that in our main proof we will need to extract the conditional B→[B∧(C→A)]. The first level of subproofing, then, will need to begin by assuming B and deriving B∧(C→A). The way we would do that, it seems, is by deriving C→A from yet another subproof. Then, in the middle subproof, we will accomplish two things. First, we will derive B∧(C→A) by making the conjunction of the assumed B with the C→A derived from the sub-subproof. We will also use that original assumed B and the derived B∧(C→A) to extract into the main proof the conditional B→[B∧(C→A)]. So we first will assume B.
Since we are going to conjoin that B to C→A, we need to get C→A into this subproof. We will do that by assuming C in a sub-subproof.
We use the technique we have been employing. So next we use conjunction introduction.
After which we use conjunction elimination. This gives us the terms of the conditional that we will extract.
We then extract the conditional C→A into the subproof.
We now have the contents we need to derive B∧(C→A).
Then with both B as our assumption and this derivation in our subproof, we can extract B→[B∧(C→A)] in the main proof.]
Agler has us
Notice that the above proof involves an auxiliary assumption ‘C,’ and while propositions can be brought into the subproof, any proposition involving ‘C’ cannot be used outside the subproof until it has been discharged with ‘→I.’
5.3.5 Conditional Elimination (→E)
We might already be familiar with modus ponens, where we infer “the consequent of a conditional by affirming the antecedent of that conditional” (176). In our derivation rules, this operation is conditional elimination (→E).
Agler illustrates with the proof for this argument:
(A∨B)→C, A, A∨B ⊢ C
So let us first set up our premises and goal proposition.
As we can see, we can use conditional elimination right away on lines 1 and 3.
Agler notes that we cannot use conditional elimination on lines 1 and 2, because the rule will only apply when we have the entire antecedent of the conditional.
Agler then offers another example:
A→B, A, B→C, C→D ⊢ D
So let us start by setting up the premises and goal proposition.
We want to obtain D. We see in line 4 that we could obtain it, if we had C. We see in line 3 that we could obtain C, if we had B. And in line 1 we see that we could obtain B if we had A. And in line 2 we see that we have A. So we start by using conditional elimination on lines 1 and 2.
This gives us B, which we can use with line 3.
That gives us C, which we can use with line 4.
And that gives us D, which is what we wanted (177).
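[The whole chain of conditional eliminations can be confirmed semantically by enumerating assignments. This is my own sketch, not from the text; the helper name `implies` is invented:

```python
from itertools import product

def implies(x, y):
    """Material conditional: x -> y is false only when x is true and y false."""
    return (not x) or y

# A -> B, A, B -> C, C -> D |- D : repeated conditional elimination.
# The only row where all four premises hold has a, b, c, d all true.
valid = all(
    d
    for a, b, c, d in product([True, False], repeat=4)
    if implies(a, b) and a and implies(b, c) and implies(c, d)
)
print(valid)  # True
```
]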
5.3.6 Reiteration (R)
Agler writes: “Reiteration (R) is one of the most intuitive derivation rules. It allows for deriving any proposition that already occurs as a premise or as a derived proposition in the proof” (177).
Agler illustrates first with a very simple instance. We begin by placing these two premises.
Then in the next line, we simply repeat A from line 2.
We can see this derivation at work in a more complicated example.
Agler notes two restrictions. The two things we cannot do are:
1. We cannot reiterate a proposition from a more deeply nested part of the proof into a less deeply nested part of the proof. And
2. We cannot reiterate a proposition from one part of the proof into another part that is not within its nest.
(Agler 178, with “We cannot” added)
So let us look closer at restriction 1, which again is that we cannot “reiterate a proposition from a more deeply nested part of the proof into a less deeply nested part of the proof” (178). Agler says another way to say this is that “it is not acceptable to reiterate a proposition out of a subproof, but it is acceptable to reiterate a proposition into a subproof” (178). He gives this example. Suppose we have done our proof in this manner, up to this part.
So we assumed W in the sub-subproof. Can we reiterate it in the subproof?
No, that goes against the first restriction for the reiteration derivation rule. So for the same reason, we cannot reiterate the S in the main proof.
Agler illustrates why this is invalid by giving the following argument:
Agler explains, “This argument is clearly invalid since from the assumption that I am the richest person in the world, it does not follow that I am the richest person in the world” (178).
We turn now to the second restriction on reiteration, which was that we cannot “reiterate a proposition from one part of the proof into another part that is not within its nest” (178). He illustrates with an example. Suppose that we have made one subproof. Then after it, we start an entirely new subproof.
The question is, can we use the S which was assumed in the other subproof?
No, according to our restriction, we cannot.
Agler’s final point about reiteration is that it is a derived rule. [By this I think he means that the validity of the rule is guaranteed on the basis of derivations made by other rules. So it is something like a shorthand for those other operations. I will quote.]
Finally, it is also important to note one distinguishing feature of reiteration, namely, that it is a derived rule. This means that any use of reiteration is somewhat superfluous since the inference that it achieves can be achieved using the existing set of derivation rules. To see this more clearly, consider the proof of the valid argument ‘R ⊢ R.’
Although the introduction of reiteration into our set of derivation rules is not essential, it is extremely convenient since the proof above can be simplified into the following proof.
[Note. Previously I had some confusion about certain subproof operations where, in order to derive a term that was already in the main proof, we first used conjunction introduction to introduce it as a conjunct, and then extracted it from that conjunction using conjunction elimination. My concern was that I saw no restriction stopping us from just re-introducing it outright, without first conjoining it to another term and then extracting it. There is one exercise where Agler introduces the term from the main proof into the subproof simply by using the reiteration derivation rule.
I suspect that the reason we sometimes see the more complicated procedure is that we previously did not have the reiteration rule, so we used other means to produce the proposition that already existed in the main proof.]
5.3.7 Negation Introduction (¬I) and Elimination (¬E)
Agler turns now to two derivation rules involving negation, namely, negation introduction (¬I) and negation elimination (¬E). He notes that “proofs involving these two forms are sometimes called indirect proofs or proofs by contradiction” (181).
So for negation introduction, we assume P, then later in the subproof we find Q and ¬Q. [Semantically speaking, this inconsistency tells us that P cannot be true, and thus that ¬P is true.] So from this subproof we derive ¬P in the outer proof.
Agler then summarizes the way these rules are used effectively in making proofs.
The basic idea in using ‘¬I’ and ‘¬E’ is to (1) assume a proposition in a subproof, (2) derive a contradiction, and (3a) derive the literal negation of the assumed proposition outside the subproof, or (3b) derive the unnegated form of the assumed proposition outside the subproof.
He illustrates with the following argument.
A→D, ¬D ⊢ ¬A
Let us first set up the premises and goal proposition.
[As we can see, in order to obtain ¬A, we will need to find a contradiction beginning with the assumption A.
By using conditional elimination and reiterating ¬D, we can find such a contradiction.
That means, by negation introduction, we can conclude ¬A.]
Agler now will illustrate negation elimination with the following argument:
(A→D)∧A, ¬D ⊢ C
[So let us begin by setting up the premises and goal proposition.]
[This will all seem odd, because we want to conclude something that is not even mentioned in the premises. At any rate, so long as we find a contradiction, we will be able to do that. Since we want to conclude C, we should assume ¬C, then find a contradiction, and then we can derive C.
By extracting the terms in the first line, we will be able to find a contradiction with the second line. So let us use conjunction elimination to decompose it.
Now we can use conditional elimination to get D.
We will reiterate ¬D, to show its contradiction to D.
This allows us to conclude C. While this might seem odd, it is perhaps like the principle of explosion. From any contradiction we can derive any formula we want. So in this case, we have (A→D)∧A and ¬D. We can decompose the first proposition into A→D and A. From this we can derive D, which contradicts the other premise ¬D. So since there is a contradiction among the premises, we can derive any arbitrary formula, C being one possibility.]
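[The explosion point can be made concrete: since the premises are jointly unsatisfiable, no assignment makes them all true, so no assignment makes them true with C false, and the argument is vacuously valid. A quick sketch of my own (not from Agler; the helper name `implies` is invented):

```python
from itertools import product

def implies(x, y):
    """Material conditional: x -> y is false only when x is true and y false."""
    return (not x) or y

# (A -> D) AND A, NOT D |- C : look for any row where all premises hold.
rows_with_true_premises = [
    (a, d, c)
    for a, d, c in product([True, False], repeat=3)
    if (implies(a, d) and a) and (not d)
]
print(rows_with_true_premises)  # [] -- none, so any conclusion follows
```
]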
5.3.8 Disjunction Introduction (∨I)
Disjunction introduction (∨I) will allow us to begin with a proposition, then add to it disjunctively any other proposition.
Disjunction introduction (∨I) is a derivation rule whereby a disjunction is derived from a proposition (atomic or complex) already occurring in the proof. Disjunction introduction states that from a proposition ‘P,’ a disjunction ‘P∨Q’ or ‘Q∨P’ can be derived. Here is the derivation rule for disjunction introduction.
Agler illustrates with this example:
So let us first set up the premises and goal proposition.
We see that we will just need to build this up using conjunction introduction and disjunction introduction. So let us first construct the disjunctions, which will serve finally as the two conjuncts.
And let us conjoin them.
Agler then provides a more complex example:
[P∨(Q∨R)]→W, R ⊢ W
So let us set up premises and goal proposition.
As we can see, we can build up the antecedent of line 1 using disjunction introduction. Then once we have it, we can derive W using conditional elimination. So let us build up one part.
Now the next.
And with it, we can derive W from line 1.
Agler gives one final illustration.
P, (P∨¬W)→R ⊢ R∨¬(Z∨Q)
Let us set up the premises and goal proposition.
At this point we can see that if we first obtained R, then we can use disjunction introduction to disjoin ¬(Z∨Q) to it. And we can get R by using conditional elimination. To do that, we first would need to use disjunction introduction to obtain the antecedent in line 2. So let us do that first.
Now that we have the antecedent, we can use conditional elimination to get R.
And now that we have R, we can disjoin ¬(Z∨Q) to it.
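[The whole derivation, reconstructed in Fitch-style notation ('P' marks a premise):

```
1. P              P
2. (P∨¬W)→R       P
3. P∨¬W           1 ∨I
4. R              2,3 →E
5. R∨¬(Z∨Q)       4 ∨I
```

]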
5.3.9 Disjunction Elimination (∨E)
[Disjunction elimination will allow us to derive a third proposition on the basis of a disjunction of two other propositions. Recall that for a disjunction to be true, at least one disjunct must be true. It cannot be that both are false. With disjunction elimination, we will make subproofs. One subproof begins by assuming the first disjunct, and it derives some third proposition (that is neither one of the disjuncts). We then start a separate subproof. This one begins by assuming the other disjunct, and it derives that same third proposition. Now, since it cannot be that both disjuncts are false, at least one must be true (or both). This means that regardless of which disjunct is true, we can be sure that the third proposition is true as well.]
Disjunction elimination (∨E) is a derivation rule such that from a disjunction ‘P∨Q,’ a proposition ‘R’ can be derived, provided that the proposition can be derived from two separate subproofs where each subproof begins with one of the disjunction’s disjuncts (i.e., one with ‘P’ and one with ‘Q’). Here is the basic form for disjunction elimination.
In the above argument form, each of the disjuncts from ‘P∨Q’ is separately assumed. From within both of these subproofs, ‘R’ is derived by some unspecified form of valid reasoning. If ‘R’ can be derived in both subproofs, then ‘R’ can be derived from ‘P∨Q.’
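[Schematically, the rule looks like this (my own layout; the dots stand for whatever valid steps lead to 'R' in each subproof):

```
1. P∨Q         P
2.    | P      A
      | ...
i.    | R
j.    | Q      A
      | ...
k.    | R
m. R           1, 2–i, j–k ∨E
```

]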
Agler gives the following example:
P∨Q, P→R, Q→R ⊢ R
[So let us set up the premises and goal proposition.]
We see that we have the disjunction. And we see that there are conditionals that have the disjuncts as their antecedents and the goal proposition as their consequent. So we know that if we assume each disjunct independently, we will be able to derive the goal proposition in each case. Then we can use disjunction elimination to derive the goal proposition in the main proof. So first we assume the first disjunct P.
That allows us to derive R.
Then we assume the other disjunct Q.
That will allow us to derive R also.
Since both disjuncts allow us to derive R in the subproofs, we can derive R in the main proof.
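[The whole derivation, reconstructed in Fitch-style notation ('P' marks a premise, 'A' an assumption):

```
1. P∨Q        P
2. P→R        P
3. Q→R        P
4.    | P     A
5.    | R     2,4 →E
6.    | Q     A
7.    | R     3,6 →E
8. R          1, 4–5, 6–7 ∨E
```

]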
Agler gives another example:
P∨(Q∧T), P→T ⊢ T
So we set it up as usual.
At first the solution may not be obvious. What we see is that if we assume P, then we can get T. So the question is, if we assume Q∧T, can we also get T? Yes, we can, by conjunction elimination. So let us begin by assuming P.
Then using conditional elimination, we can derive T.
Now, let us assume Q∧T.
As we noted, we can then derive T again.
And since we derived T from each disjunct, we can derive it in the main proof from disjunction elimination.
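[The whole derivation, reconstructed in Fitch-style notation ('P' marks a premise, 'A' an assumption):

```
1. P∨(Q∧T)    P
2. P→T        P
3.    | P     A
4.    | T     2,3 →E
5.    | Q∧T   A
6.    | T     5 ∧E
7. T          1, 3–4, 5–6 ∨E
```

]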
Agler then gives this final complex example:
S→W, M→W, P∧(R∨T), R→(S∨M), T→(S∨M) ⊢ W
[So let us set it up.
We will want to derive W. We see from lines 1 and 2 that if we derive either S or M, we can derive W. We then see S∨M in lines 4 and 5. If we derive either R or T, we can then derive S∨M. And we see in line 3 that we can derive R∨T. So let us start there. We will derive it using conjunction elimination.
Now we will derive S∨M. First we assume R.
By conditional elimination on line 4, we can then derive S∨M.
Now let us suppose T.
By conditional elimination on line 5, we can then also derive S∨M.
Now we can use disjunction elimination to derive S∨M in the main proof.
We can now assume S.
Which allows us to derive W.
And then we assume M.
This again allows us to derive W.
And now finally we can derive W in our main proof using disjunction elimination.]
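[The whole derivation, reconstructed in Fitch-style notation ('P' marks a premise, 'A' an assumption):

```
 1. S→W          P
 2. M→W          P
 3. P∧(R∨T)      P
 4. R→(S∨M)      P
 5. T→(S∨M)      P
 6. R∨T          3 ∧E
 7.    | R       A
 8.    | S∨M     4,7 →E
 9.    | T       A
10.    | S∨M     5,9 →E
11. S∨M          6, 7–8, 9–10 ∨E
12.    | S       A
13.    | W       1,12 →E
14.    | M       A
15.    | W       2,14 →E
16. W            11, 12–13, 14–15 ∨E
```

Notice that disjunction elimination is used twice: once on R∨T to obtain S∨M, and then again on S∨M to obtain W.]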
Agler says that disjunction elimination is one of the most complicated derivation rules to master. He also says that it might not be apparent how it relates to normal reasoning. So he gives an example. [This illustration will be very helpful. We might think of win-win or lose-lose situations.]
Suppose that you wanted to show that John will have a great evening follows from John will either go to the party or stay home. In order to show this, you need to show both that if John goes to the party, he will have a great time and that if John stays home, he will also have a great time. The reason it needs to follow from both disjuncts is that if John will have a great evening only if he goes to the party and not if he stays home (or vice versa), then it is possible for the premise John will either go to the party or stay home to be true and the conclusion John will have a great evening to be false.
5.3.10 Biconditional Elimination and Introduction (↔E and ↔I)
In biconditional introduction (↔I), we make two subproofs. We assume a proposition P in the first and derive Q. Then we assume Q in the second subproof and derive P. This allows us to derive P↔Q in the main proof.
Agler illustrates with this example:
P→Q, Q→P ⊢ P↔Q
[Let us begin by setting up the proof.]
[With what we know about biconditional introduction, the path is obvious. We will assume P to derive Q. And we will assume Q to derive P. This will allow us to derive P↔Q in the main proof. So we can see this step by step. We assume P.
And we derive Q.
Then we assume Q.
And we derive P.
This allows us to derive our goal proposition, using biconditional introduction.]
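[The whole derivation, reconstructed in Fitch-style notation ('P' marks a premise, 'A' an assumption):

```
1. P→Q        P
2. Q→P        P
3.    | P     A
4.    | Q     1,3 →E
5.    | Q     A
6.    | P     2,5 →E
7. P↔Q        3–4, 5–6 ↔I
```

]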
In biconditional elimination (↔E), “from ‘P↔Q’ and ‘P,’ we can derive ‘Q.’ And from ‘P↔Q’ and ‘Q,’ we can derive ‘P.’” (188). [It cannot be that P is true and Q is false, so if we know P, then we can derive Q. I wonder why there is no mention of negated forms. Suppose we have P↔Q and we also have ¬P, I would think we could then derive ¬Q, using the same reasoning.]
Agler’s example is:
(P↔Q)↔(R→T), R→T ⊢ P↔Q
[Here we see that we have the right side of the biconditional as another proposition, so we can derive the right side using biconditional elimination.]
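[Reconstructed in Fitch-style notation, the derivation takes only one step beyond the premises ('P' marks a premise):

```
1. (P↔Q)↔(R→T)    P
2. R→T            P
3. P↔Q            1,2 ↔E
```

Here the biconditional on line 1 is the proposition being eliminated; R→T on line 2 matches its right side, so its left side P↔Q follows.]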
[Note, in my version of the text, it shows R↔T instead of R→T in line 1.]
Agler, David. Symbolic Logic: Syntax, Semantics, and Proof. New York: Rowman & Littlefield, 2013.