17 Apr 2016

Suppes (9.9) Introduction to Logic, “Elementary Principles about Operations on Sets”, summary




[The following is summary. My commentary is in brackets. Boldface is mine. I apologize in advance for any distracting typos or other errors.]
 



Summary of

 
 
Patrick Suppes
 
Introduction to Logic
 
Ch. 9 Sets
 
§9.9 Elementary Principles about Operations on Sets
 
 
Brief summary:
There are a number of identities and other principles about sets, established by means of tautological equivalences, that can be useful when making proofs about sets. Here is a list [note that Λ is the empty set and V is the domain of discourse]:
Suppes. 9.9a
As we can see, the list expresses a number of properties, like identity, commutativity, distributivity, associativity, excluded middle, non-contradiction, idempotency, absorption, and De Morgan’s laws.
 
 
Summary
 
 
[Recall from section 9.8 how we drew Venn diagrams to depict the properties of and relations between sets. We then used the diagrams to determine if syllogistic inferences were valid, by seeing if the conclusion of the syllogism accords with the diagram for the premises.] We are now interested in finding certain equations, “or principles of other forms” that are always true for all sets. To do so, we could use Venn diagrams. “However, a method which has more general applicability and more interest from a mathematical standpoint is to reduce the test of the truth of an equation for all sets to the problem of deciding if a corresponding formula is a tautological equivalence, which can always be done by sentential methods” (202). Suppes then provides an example. We first suppose that we want to show that for sets A and B, the union of (or combined total of all members from) A and B is the same as that of B and A, or:
(1)       A ∪ B = B ∪ A
(Suppes 202)
[Recall the following from section 9.5.] We know already from the definition of the union operation that for any arbitrary x
(2)        x ∈ A ∪ B ↔ x ∈ A ∨ x ∈ B
(Suppes 203)
Suppes then notes the following tautological equivalence:
(3)        x ∈ A ∨ x ∈ B ↔ x ∈ B ∨ x ∈ A
(Suppes 203)
[I am not entirely sure how to show this is a tautology. Perhaps this truth table representation is sufficient.
Suppes. 9.9 a
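Another way to check it, as a sketch of my own rather than anything in Suppes, is to brute-force the four truth-value combinations, treating “x ∈ A” and “x ∈ B” as the atomic sentences P and Q; in Python:

from itertools import product

# (3) has the sentential form (P or Q) <-> (Q or P).
for P, Q in product([True, False], repeat=2):
    assert (P or Q) == (Q or P)
print("(P or Q) <-> (Q or P) holds in all four rows, so (3) is a tautology.")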
] Suppes then writes,
And corresponding to (2), we have:
(4)        x ∈ B ∪ A ↔ x ∈ B ∨ x ∈ A
(Suppes 203)
[Recall that (2) is: x ∈ A ∪ B ↔ x ∈ A ∨ x ∈ B. It seems that we are saying with (2) that something is a member of the union of a first set with a second set if and only if it is a member of the first set or a member of the second set. Proposition (4) also follows this pattern. Perhaps that is what is meant by “corresponding”, but I am not sure.] [In section 5.1 Suppes wrote, “for any x and y, if x=y, then y=x. This fact is sometimes expressed by saying that the relation of identity is symmetric. We notice also that if x=y and y=z, then x=z. This fact is expressed by saying that the relation of identity is transitive” (102).] Suppes then says we combine (2) through (4) by means of the transitivity and symmetry of equivalence, thereby concluding
(5)       x ∈ A ∪ B ↔ x ∈ B ∪ A
(203)
[Let us try to see how this combination works. Here again are our sentences:
(2)        x ∈ A ∪ B ↔ x ∈ A ∨ x ∈ B
(3)        x ∈ A ∨ x ∈ B ↔ x ∈ B ∨ x ∈ A
(4)        x ∈ B ∪ A ↔ x ∈ B ∨ x ∈ A
Use symmetry to invert (4):
(4)        x ∈ B ∪ A ↔ x ∈ B ∨ x ∈ A
(4b)      x ∈ B ∨ x ∈ A ↔ x ∈ B ∪ A
Then use transitivity to remove the middle term in (3) and (4b):
(3)        x ∈ A ∨ x ∈ B ↔ x ∈ B ∨ x ∈ A
(4b)      x ∈ B ∨ x ∈ A ↔ x ∈ B ∪ A
(*)         x ∈ A ∨ x ∈ B ↔ x ∈ B ∪ A
Then use transitivity to remove the middle term in (2) and (*):
(2)        x ∈ A ∪ B ↔ x ∈ A ∨ x ∈ B
(*)         x ∈ A ∨ x ∈ B ↔ x ∈ B ∪ A
(**)       x ∈ A ∪ B ↔ x ∈ B ∪ A
Which is the same as (5), the one we wanted to obtain:
(**)       x ∈ A ∪ B ↔ x ∈ B ∪ A
(5)        x ∈ A ∪ B ↔ x ∈ B ∪ A
] [Recall from section 9.2 the principle of extensionality for sets. It says that two sets are identical when they have the same members: A = B ↔ (x)(x ∈ A ↔ x ∈ B). So in our current case we have:
x ∈ A ∪ B ↔ x ∈ B ∪ A
Since the members x of the set A ∪ B are exactly the members of the set B ∪ A, this means that the two sets are identical.] Thus, Suppes explains, we can “conclude from (5) by the principle of extensionality for sets that
(6)      A ∪ B = B ∪ A
(Suppes 203)
This is what we set out to prove. Suppes explains that normally we can omit the inferences from (5) to (6) and from (1) to (2). Also, we can present these equivalences like the identities we saw in Chapter 7. Thus the informal proof of A ∪ B = B ∪ A could be:
x ∈ A ∪ B ↔ x ∈ A ∨ x ∈ B
                    ↔ x ∈ B ∨ x ∈ A
                    ↔ x ∈ B ∪ A
(Suppes 203)
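[To connect the result with the opening claim that such equations hold for all sets, here is a small brute-force check, my own sketch rather than anything in Suppes, run over every pair of subsets of a three-element universe:

from itertools import combinations

universe = {1, 2, 3}
subsets = [frozenset(c) for r in range(len(universe) + 1)
           for c in combinations(universe, r)]

# Check A ∪ B = B ∪ A for every pair of subsets of the toy universe.
assert all((A | B) == (B | A) for A in subsets for B in subsets)
print("A ∪ B = B ∪ A holds for all", len(subsets) ** 2, "pairs of subsets.")

Of course this only tests a finite universe; Suppes’ proof covers all sets.]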
We may have noticed that this proof does not have justifications marked for each line. This is because the inferences either use tautological equivalences or
one of the four characterizing equivalences:
x ∈ A ∪ B ↔ x ∈ A ∨ x ∈ B
x ∈ A ∩ B ↔ x ∈ A & x ∈ B
x ∈ A ~ B ↔ x ∈ A & x ∉ B
x ∈ ~A ↔ x ∉ A
(Suppes 203)
Suppes will demonstrate how these sorts of proofs work by proving the following formulation:
A ~ (A ∩ B) = A ~ B
[Before we continue, let us first visualize this somehow with the Venn diagrams. So we draw two sets, A and B.
Suppes. 9.9 b
Let us begin with the first part of the equation, A ~ (A ∩ B), and let us see if it marks the same region as A ~ B. And let us start more specifically with A ∩ B.
Suppes. 9.9 c
Now we want A ~ (A ∩ B), in other words, what is in A that is not in A ∩ B.
Suppes. 9.9 d
So we need to find out if the other side of the equation marks exactly the same region as this:
Suppes. 9.9 e
The other formula is A~B, in other words, what is in A that is not in B. As we can see, it is in fact the same region.
Suppes. 9.9 f
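We can also check the diagram with concrete sets (my own example, not from Suppes), using Python’s set difference for the “~” operation:

A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

left = A - (A & B)     # A ~ (A ∩ B): members of A not in A ∩ B
right = A - B          # A ~ B: members of A not in B

print(left, right, left == right)    # {1, 2} {1, 2} True

Both sides pick out the same region, the part of A outside B.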
So again, we are trying to prove:
A ~ (A ∩ B) = A ~ B
As far as I can tell, Suppes will conduct the proof by taking the first half of the formula to be proven and altering it by means of equivalences or tautologies until it finally becomes identical to the second half of the formula. So for the first step of the proof, he uses this equivalence mentioned above:
x ∈ A ~ B ↔ x ∈ A & x ∉ B
As we can see, it fits the first half of the formula to be proved, which was A ~ (A ∩ B). So we now have:
(1)  x ∈ A ~ (A ∩ B) ↔ x ∈ A & x ∉ A ∩ B
For the next and following lines, we keep the left side of the biconditional the same, and we just modify the right side. It seems we apply the following rule for step (2), in order to eliminate the intersection of A and B:
x ∈ A ∩ B ↔ x ∈ A & x ∈ B
Yet note that the intersection we want to eliminate is a matter of non-membership:
x ∉ A ∩ B
I might be mistaken, but it seems Suppes will first convert this to a negated form of membership. So:
−(x ∈ A ∩ B)
Now we use the equivalence
x ∈ A ∩ B ↔ x ∈ A & x ∈ B
to go from (1) to (2) in the following:
(1)  x ∈ A ~ (A ∩ B) ↔ x ∈ A & x ∉ A ∩ B
(2)                              ↔ x ∈ A & −(x ∈ A & x ∈ B)
To get to the next step, we perhaps should refer to an equivalence Suppes makes earlier in the book. It is one of De Morgan’s Laws (see p.34):
−(P & Q) ↔ −P ∨ −Q
(Suppes 34)
(2)                              ↔ x ∈ A & −(x ∈ A & x ∈ B)
(3)                              ↔ x ∈ A & (x ∉ A ∨ x ∉ B)
To make the next step, it seems we need to use a distributive law. On page 131 he describes it with the following formulation:
x ★ (y ∘ z) = (x ★ y) ∘ (x ★ z).
(Suppes 131)
(3)                              ↔ x ∈ A & (x ∉ A ∨ x ∉ B)
(4)                              ↔ (x ∈ A & x ∉ A) ∨ (x ∈ A & x ∉ B)
He explains that to make the next step, we use the tautological equivalence:
(P & −P) ∨ Q ↔ Q
Suppes. 9.9 g
(P & −P) ∨ Q ↔ Q
(4)                              ↔ (x ∈ A & x ∉ A) ∨ (x ∈ A & x ∉ B)
(5)                              ↔ x ∈ A & x ∉ B
To make the next step it seems we use this equivalence from the above list:
x ∈ A ~ B ↔ x ∈ A & x ∉ B
(5)                              ↔ x ∈ A & x ∉ B
(6)                              ↔ x ∈ A ~ B
Let us compare what we are trying to prove with that final line in full:
A ~ (A ∩ B) = A ~ B
x ∈ A ~ (A ∩ B) ↔ x ∈ A ~ B
Somehow line (6) is the final line of the proof, but I am not sure about the fact that our proof has the “x ∈” component to it, while the original formulation does not. They are equivalent, but I do not know the rule that says so (perhaps it is again the principle of extensionality, with the inference from the membership biconditional to the set identity left implicit, as Suppes said it normally would be). Likewise, I am not sure how we went from A ~ (A ∩ B) in the original formula to the x ∈ A ~ (A ∩ B) in the first line of the proof. This is something he does in other cases too. Here is the whole proof.]
(1)  x ∈ A ~ (A ∩ B) ↔ x ∈ A & x ∉ A ∩ B
(2)                              ↔ x ∈ A & −(x ∈ A & x ∈ B)
(3)                              ↔ x ∈ A & (x ∉ A ∨ x ∉ B)
(4)                              ↔ (x ∈ A & x ∉ A) ∨ (x ∈ A & x ∉ B)
(5)                              ↔ x ∈ A & x ∉ B
(6)                              ↔ x ∈ A ~ B          Q.E.D.
(Suppes 203)
 
Suppes then gives a list of useful identities and other principles [recall from section 9.4 that Λ is the empty set and from section 9.6 that V is the domain of discourse or domain of individuals (for example, in sociology, if we speak of the set of albinos, we implicitly mean only those albinos within the set of humans)]:
Suppes. 9.9a
Regarding their order, Suppes says that
the first eight may be taken as axioms of a theory formalized in first-order predicate logic with identity; with the addition of the axiom: (∃A)(∃B)(A≠B) and the appropriate definition of set difference the remaining twenty-three principles may be derived as theorems. Any model of these axioms is called a Boolean algebra, and the axioms are sometimes called the axioms for the algebra of sets.
(204)
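[To illustrate the point, though this is my own sketch and not anything Suppes does, the first eight identities can be verified mechanically over a tiny domain, taking Λ as the empty set, V as the whole domain, and ~A as V minus A:

from itertools import combinations

V = frozenset({1, 2, 3})          # domain of individuals
empty = frozenset()               # Λ, the empty set
subsets = [frozenset(c) for r in range(len(V) + 1) for c in combinations(V, r)]

def comp(A):                      # ~A, complement relative to V
    return V - A

for A in subsets:
    assert A | empty == A and A & V == A                 # (1), (2)
    assert A | comp(A) == V and A & comp(A) == empty     # (7), (8)
    for B in subsets:
        assert A | B == B | A and A & B == B & A         # (3), (4)
        for C in subsets:
            assert A | (B & C) == (A | B) & (A | C)      # (5)
            assert A & (B | C) == (A & B) | (A & C)      # (6)
print("Identities (1)-(8) hold over this three-element domain.")

This is only a finite check over one small domain, not a derivation from the axioms.]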
 
Suppes then discusses the names of these equations. So recall the first one.
(1) A ∪ Λ = A
[I think Suppes’ point is that the empty set, when united with another set, yields no more than that other set, but let me quote:] “Identity (1) says that the empty set is a right-hand identity element with respect to the union operation” (205).
(2) A ∩ V = A
[For this notion of right-hand identity element, Suppes earlier wrote:
THEOREM 1 (EXISTENCE OF RIGHT-HAND IDENTITY ELEMENT).
(∃y)(x)(x ∘ y = x)
(Suppes 105)
So it seems something is a right-hand identity element if, when combined to the right of another element, it does nothing to alter that other element.] [Recall from section 9.6 that A is a subset of the domain V. Thus were they to intersect, you would have a set no different from A.] “(2) says that the domain V of individuals is a right-hand identity element with respect to intersection” (205).
(3) A ∪ B = B ∪ A
(4) A ∩ B = B ∩ A
(Suppes 204)
(3) and (4) are the commutative laws for union and intersection.
(Suppes 205)
[Earlier Suppes formulates the commutative laws as: P&Q↔Q&P, and P∨Q↔Q∨P. (p.34)]
(5) A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
(6) A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
(Suppes 204)
Identities (5) and (6) are dual distributive laws. Notice that if we compare union to arithmetical addition and intersection to arithmetical multiplication, the analogue of (5) does not hold in arithmetic.
(Suppes 205)
[So let us consider: 5 + (2 × 3) = (5 + 2) × (5 + 3). Here the left side equals 11, and the right side equals 56. So (5) does not hold in arithmetic when we substitute the operations. What about (6)? 5 × (2 + 3) = (5 × 2) + (5 × 3). Here the left side equals 25 and the right side equals 25.]
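[The same comparison can be run mechanically; a quick check of the two arithmetic analogues (my own snippet):

a, b, c = 5, 2, 3
print(a + (b * c), (a + b) * (a + c))   # 11 56 — the analogue of (5) fails
print(a * (b + c), (a * b) + (a * c))   # 25 25 — the analogue of (6) is ordinary distributivity
]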
(7) A ∪ ~A = V
(Suppes 204)
Identity (7) is the set formulation of the law of excluded middle; everything in a given domain is either in a set or its complement. (Suppes 205)
[The law of non-contradiction says you cannot have a proposition and its negation both be true.]
(8) A ∩ ~A = Λ
(Suppes 204)
In (8) we have the law of contradiction; nothing can be a member of both a set and its complement.
(Suppes 205)
[Or, a set shares nothing in common with whatever else is not in that set.]
(9) A ∪ A = A
(10) A ∩ A = A
(Suppes 204)
Equations (9) and (10) express what is usually called the idempotency of union and intersection. The problem of finding an arithmetical operation which is idempotent is left as an exercise.
(Suppes 205)
[Roy Cook defines idempotent, in his A Dictionary of Philosophical Logic, as:
IDEMPOTENT1 A unary function or operation is idempotent if and only if applying that function twice to an argument produces the same result as applying it once. In other words, a unary function f is idempotent if and only if, for all x, we have:
f(x) = f(f(x))
IDEMPOTENT2 A binary function or operation is idempotent if and only if, when we apply it to two instances of the same argument, we receive that same argument. In other words, a binary function f is idempotent if and only if, for all x, we have:
x = f(x, x)
In classical logic and most non-standard logics, the truth functions corresponding to conjunction and disjunction are idempotent.
(Cook 141)
If we had to find idempotent arithmetic operations, perhaps a unary one would be the absolute value and a binary one equality. See the wiki page.]
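[A quick check of (9) and (10), together with the absolute-value suggestion just made (my own snippet, not from Suppes or Cook):

A = {1, 2, 3}
assert A | A == A               # (9)  A ∪ A = A: union is idempotent
assert A & A == A               # (10) A ∩ A = A: intersection is idempotent

x = -7
assert abs(abs(x)) == abs(x)    # absolute value is idempotent in the unary sense
]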
(14) ~~A = A
(15) A = ~B → B = ~A
(Suppes 204)
Principle (14) is the law of double negation, and (15) expresses a principle of contraposition.
(Suppes 205)
[Cook defines contrapositive this way:
CONTRAPOSITIVE1 Within categorical logic, the contrapositive of a categorical proposition is obtained by switching the subject term and predicate term of the proposition, and replacing the subject term and the predicate term with their complements.
(Cook 69)
]
(18) A ∪ (B ∪ C) = (A ∪ B) ∪ C
(19) A ∩ (B ∩ C) = (A ∩ B) ∩ C
(20) A ∪ (A ∩ B) = A
(21) A ∩ (A ∪ B) = A
(Suppes 204)
In (18) and (19) we have associative laws, which justify the omission of parentheses, and in (20) and (21) laws of absorption.
(Suppes 205)
[Earlier in the text, Suppes described the associative laws with this formulation:
(x) (y) (z) (x ∘ (y ∘ z) = (x ∘ y) ∘ z).
(Suppes 105)
And Cook defines absorption this way:
ABSORPTION Given two binary functions f and g, absorption holds between f and g if and only if, for all a and b:
f(a, g(a, b)) = g(a, f(a, b)) = a
Within Boolean algebra, absorption holds between the meet and join operators – that is:
A ∩ (A ∪ B) = A
A ∪ (A ∩ B) = A
In classical propositional logic, absorption holds between the truth functions associated with conjunction and disjunction – that is:
A ∧ (A ∨ B)
and:
A ∨ (A ∧ B)
are logically equivalent to:
A
(Cook 5-6)
]
(23) ~(A ∪ B) = ~A ∩ ~B
(24) ~(A ∩ B) = ~A ∪ ~B
(Suppes 204)
Equations (23) and (24) are De Morgan’s laws.
(Suppes 205)
[Previously Suppes formulated De Morgan’s laws as follows:
−(P & Q) ↔ −P ∨ −Q
−(P ∨ Q) ↔ −P & −Q
(Suppes 34)
]
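[A concrete check of (23) and (24), with complements taken relative to a small domain V (my own example, not from Suppes):

V = {1, 2, 3, 4, 5}
A = {1, 2, 3}
B = {3, 4}

assert V - (A | B) == (V - A) & (V - B)   # (23) ~(A ∪ B) = ~A ∩ ~B
assert V - (A & B) == (V - A) | (V - B)   # (24) ~(A ∩ B) = ~A ∪ ~B
]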
 
Suppes then remarks that if we want to prove any of these principles that have the empty set Λ or the domain of individuals V, we need to also use the following:
x ∉ Λ
x ∈ V
Suppes will now show this with a proof. We will consider this principle:
(13) Λ ≠ V
[Suppes will show that a proof for this can use the assumptions x ∉ Λ and x ∈ V. Since he says that we need these assumptions for a proof, I presume that it is impossible to make the proof without them, but I am not sure how to show that.] [Recall again from section 9.2 the principle of extensionality for sets:
A = B ↔ (x)(x ∈ A ↔ x ∈ B)
(Suppes 178)
]
For the proof, we first suppose Λ = V.
Then by the principle of extensionality
x ∈ Λ ↔ x ∈ V
but
x ∉ Λ
whence
x ∉ V,
which is absurd. Q.E.D.
(Suppes 205)
[As far as I can tell, the absurdity results from the contradiction with the assumption x ∈ V.]
 
[The next couple of ideas I will not be able to express properly. It seems he is saying that we can use tautological implication instead of tautological equivalence and still obtain useful principles of inclusion. I am not at all certain, but perhaps his point is that to obtain these principles it is not enough to use the equivalences we listed above. I probably have this wrong. Let me quote:]
By considering an arbitrary member of a set and using tautological implications rather than tautological equivalences, useful principles of inclusion for sets are easily established. As an example we may consider:
A ∩ B ⊆ A.
PROOF.
(1) x ∈ A ∩ B ↔ x ∈ A & x ∈ B
(2)                     → x ∈ A.             Q.E.D.
Line (1) is an equivalence; the right-hand side of (1) tautologically implies the right-hand side of (2) but is not tautologically equivalent to it. The characterizing equivalence used for inclusion is, of course:
A ⊆ B ↔ (x)(x ∈ A → x ∈ B).
(Suppes 205-206)
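[Python’s subset test gives a quick concrete check of this inclusion (my own example):

A = {1, 2, 3}
B = {2, 3, 4}
print((A & B).issubset(A))    # True: A ∩ B ⊆ A
]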
 
[The next point I also will not be able to explain. The idea seems to be that when making these proofs for inclusion, there are instances when we might think we should use an equivalence sign but really we should use an implication sign. This would be for cases when, instead of marking an equivalence relation to the right side of the line directly above, we instead mark an implication relation to the left side of an earlier line. Let me quote so you can interpret it better.]
In writing down proofs of inclusion relations between sets, one point needs to be remarked. Consider:
(A ∩ B) ~ (B ∩ C) ⊆ A.
PROOF.
(1) x ∈ (A ∩ B) ~ (B ∩ C) ↔ x ∈ A ∩ B & x ∉ B ∩ C
(2)                                        → x ∈ A ∩ B
(3)                                        → x ∈ A & x ∈ B
(4)                                        → x ∈ A              Q.E.D.
The point is that although line (3) is tautologically equivalent to line (2), we use an implication sign in line (3), because this implication sign indicates the relation of (3) to the left side of (1), not to the right side of (2).
(Suppes 206)
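[And a concrete check of this inclusion as well (my own example, not from Suppes):

A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 5}
left = (A & B) - (B & C)          # (A ∩ B) ~ (B ∩ C)
print(left, left.issubset(A))     # {2} True
]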
 
Suppes’ final point is that we can use this method to verify the inclusion relations that correspond to all the tautological implications listed in Chapter 2. (Suppes 206) [Below is displayed a list from chapter 2.]
Suppes. 9.9b from 2.2
 
 
 
From:
Suppes, Patrick. Introduction to Logic. New York: Van Nostrand Reinhold / Litton Educational, 1957.
 
Also cited:
Cook, Roy T. A Dictionary of Philosophical Logic. Edinburgh: Edinburgh University Press, 2009.
 
Michael Rieppel’s online truth table generator
http://mrieppel.net/prog/truthtable.html
 

 
 
