16 Jul 2016

Nolt (14.1) Logics, ‘Higher-Order Logics: Syntax’, summary


 

[The following is summary. All boldface in quotations is in the original unless otherwise noted. Bracketed commentary is my own. Please forgive my typos and other errors, as proofreading is incomplete.]
 


Summary of

John Nolt
 
Logics

Part 4: Extensions of Classical Logic

Chapter 14: Higher-Order Logics

14.1 Higher-Order Logics: Syntax




Brief summary:
In first-order logic, quantifiers bind variables that stand for individuals. In second-order logic, we also have quantifiers that bind predicate variables, which stand for properties of individuals. In this way, we can express the following inference, for example: “Al is a frog. Beth is a frog. Therefore, Al and Beth have something in common.” We can write it as: ‘Fa, Fb ⊢ ∃X(Xa & Xb)’. Here we have the predicate variable ‘X’, which allows us to refer to some unspecified property. Third-order logic additionally allows us to quantify over properties of properties of individuals. Take this inference for example: “Socrates is snub-nosed. Being snub-nosed is an undesirable property. Therefore, Socrates has a property that has a property.” If we use a larger typeface for the third-order predicate variable, we can write this as:
[image 14.1.c: Nolt’s formalization, roughly ‘Ns, UN ⊢ ∃X(Xs & ∃Y(YX))’, with the third-order predicate variable in a larger typeface]
[Here ‘s’ = Socrates, ‘N’ = ‘is snub-nosed’, ‘U’ = ‘is undesirable’, ‘X’ is a predicate variable, and ‘Y’ is a predicate variable for another predicate. The conclusion says that there is some property such that Socrates has it and also that this property itself has some property.]  Fourth-order and higher logics are possible as well. Variables of each order are of a different type. An infinite hierarchy of higher-order logics is called a theory of types. Higher-order logic is especially useful for using logic to formulate the properties of numbers and other concepts in arithmetic. The view that all mathematical ideas can be defined in terms of purely logical ideas, and also that all mathematical truths are logical truths, is called logicism. We can use second-order logic to express a number of important logical ideas. One of them is identity. Leibniz’s law says that objects are identical if and only if they share exactly the same properties. It is written:
Leibniz’s Law
a = b ↔ ∀X(Xa ↔ Xb)
It is analyzable into two subsidiary principles.
The Identity of Indiscernibles
∀X(Xa ↔ Xb) → a = b
This says that if two things are indiscernible, as they share exactly the same properties, then they are identical. The other is
The Indiscernibility of Identicals
a = b → ∀X(Xa ↔ Xb)
This says that if two things are identical, then they share exactly the same properties. We can express an analogy as:
∃Z(Zab & Zcd)
This says that “there is some respect in which a stands to b as c stands to d” (Nolt 386). The idea that ‘All asymmetric relations are irreflexive’ can be written as:
∀Z(∀xy(Zxy → ~Zyx) → ∀x~Zxx)


Summary

Nolt has us consider the following valid inference:
Al is a frog.
Beth is a frog.
∴ Al and Beth have something in common.
(Nolt 382)
Nolt first tries to symbolize this inference in first-order predicate logic. We will take ‘H’ to be a three-place predicate meaning “... and ... have ... in common” (382). We then formulate it as:
Fa, Fb ⊢ ∃xHabx
(Nolt 382)
So this would mean that ‘a’ is a frog, ‘b’ is a frog, therefore, there is something which ‘a’ and ‘b’ have in common. [Nolt observes that this formulation is not valid. Since ‘H’ does not appear in the premises, we can interpret it however we like; on an interpretation where nothing stands in the relation H to a and b, the premises are true and the conclusion is false.] In the conclusion, the variable x stands for some individual on the order of the objects ‘a’ and ‘b’. However, what we really want to capture is a predicate of the sort F, or “is a frog,” because that is what Al and Beth share in common. Now suppose instead we had a variable for the predicate, call it X. We then could validly write the inference as:
Fa, Fb ⊢ ∃X(Xa & Xb)
where ‘X’ is not a specific predicate, but a variable replaceable by predicates – a variable that stands for properties. Thus the conclusion asserts that there is a | property X which both Al and Beth have [...].
(Nolt 382-383)
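[As a side illustration that is not in Nolt: this kind of inference can be checked in a modern proof assistant such as Lean 4, since in a dependently typed system a one-place predicate over a domain α is just a term of type α → Prop, so quantifying over predicates is available natively. The names below (‘Frog’, ‘al’, ‘beth’, the domain ‘α’) are my own, and the lines are only a minimal sketch of the idea, not Nolt’s formalism.]

-- A minimal Lean 4 sketch: from 'Fa' and 'Fb' we may conclude ∃X(Xa & Xb),
-- taking the property 'Frog' itself as the witness for the predicate variable X.
example {α : Type} (Frog : α → Prop) (al beth : α)
    (h1 : Frog al) (h2 : Frog beth) :
    ∃ X : α → Prop, X al ∧ X beth :=
  ⟨Frog, h1, h2⟩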

So when we allow the predicates or functions themselves to be quantifiable variables, we can express certain ideas that previously we were unable to express. For instance, “We may formalize ‘Everything has some property’, for example, as ‘∀x∃YYx’ ” (Nolt 383). [That expression means: for all things x, there is some property Y such that x is Y.]  Nolt has us note that we now have two types of variables: {1} lowercase variables for individuals and {2} uppercase variables for properties. When we quantify over both individuals and properties, we are using second-order logic:
A logic which quantifies over both individuals and properties of individuals is called a second-order logic, as opposed to systems such as classical predicate logic, which quantify only over individuals and are therefore called first-order logics.
(Nolt 383)
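[Continuing the same hedged Lean 4 sketch of mine: ‘Everything has some property’, ∀x∃YYx, also checks out, since for any individual x the property of being identical to x is a property that x has. Again, the names are my own, not Nolt’s.]

-- Everything has some property: 'being identical to x' serves as the witness property.
example {α : Type} : ∀ x : α, ∃ Y : α → Prop, Y x :=
  fun x => ⟨fun y => y = x, rfl⟩
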
Nolt then gives an example involving an even higher order of quantification over properties. The argument written out is:
Socrates is snub-nosed.
Being snub-nosed is an undesirable property.
∴ Socrates has a property that has a property.
(Nolt 383)
[So it seems it works like the following. In the first sentence, we have our first order of properties. Socrates has the property of being snub-nosed. Then we say that being snub-nosed is itself an undesirable property. So it is a property that as a property has another property. Then when we quantify for it, we are making not just the property a variable, but we are making the property of the property a variable. As I am not sure I can represent the type-face size differences, let me place this as an image.]
[image 14.1.a: Nolt’s formalization of the Socrates argument, roughly ‘Ns, UN ⊢ ∃X(Xs & ∃Y(YX))’, with the third-order predicate variable in a larger typeface]
(Nolt 383, with footnote 1 reading: “This difference is usually marked by special subscripts or superscripts, but for the simple illustrations given here, variation in size is more graphic.”)

In the formulation, we note that ‘N’ takes the predicate position in the first premise, as Ns, while it takes the subject position in the second premise, as UN. So while in the first case it is the property of being snub-nosed, in the second case it is the name for that property, which is now itself being predicated by yet another predicate, namely, ‘is undesirable’. [So recall that Nolt defined second-order logic as quantifying over both individuals and properties of individuals. Now we are still quantifying over individuals and their properties, but we are also quantifying over the properties of those properties. We thus would call this a third-order logic.]
Logics which quantify over properties of properties in this fashion are called third-order logics. And there are fourth-order logics, fifth-order logics, and so on. Any logic of the second order or higher is called a higher-order logic. Higher-order logics use a different type of variable for each domain of quantification (individuals, properties of individuals, properties of properties of individuals, and so on). An infinite hierarchy of higher-order logics is called a theory of types.
(Nolt 383)
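[A further side illustration in the same Lean 4 sketch, with my own labels: the third-order conclusion that Socrates has a property that has a property goes through by taking ‘is snub-nosed’ as the witness for X and ‘is undesirable’ as the witness for Y. A property of properties is simply a term of type (α → Prop) → Prop.]

-- Third-order sketch: from Ns and UN infer ∃X(Xs & ∃Y(YX)).
example {α : Type} (SnubNosed : α → Prop)
    (Undesirable : (α → Prop) → Prop) (socrates : α)
    (h1 : SnubNosed socrates) (h2 : Undesirable SnubNosed) :
    ∃ X : α → Prop, X socrates ∧ ∃ Y : (α → Prop) → Prop, Y X :=
  ⟨SnubNosed, h1, Undesirable, h2⟩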

Nolt notes that Frege invented higher-order logic for the purpose of analyzing the concept of natural number (Nolt 383). To explain Frege’s notion, Nolt has us consider the number two, and he asks, what exactly is this number? He observes that many things can have the property of being two, so it would seem to be a property. One example is the earth’s poles. But he notes that their property of being two is not something either one has. Rather, what has this property is the set that contains just these two poles. “Twoness, then, is a property of this set or its defining property, not of the individual poles” (Nolt 383). Frege specifically defined twoness “as the property of being a property with two examples” (Nolt 383d). Thus the property itself of “being a pole has the higher-order property of twoness because it is exemplified by two individual objects” (384a).

Nolt next shows how to formalize Frege’s insight into second-order logic. Here, ‘P’ will be a first-order predicate that stands for ‘is a pole’. [For the next formulation, Nolt has us recall the sorts of structures using the equality operator that he discussed in section 6.3. The schema he will use is for numerical quantifiers. He had the example for “there are exactly two minds.” We write it as:
∃x∃y(~x = y & ∀z(Mz ↔ (z = x ∨ z = y)))
This structure says the following. First, there is some object x and some object y, and they are not the same object. Next, any object z is a mind if and only if z is identical either to x or to y. In other words, there are two distinct objects, x and y, and the minds are exactly these two. Thus there are exactly two minds.] Here is how we formulate that there are exactly two poles.
∃x∃y(~x = y & ∀z(Pz ↔ (z = x ∨ z = y)))
(Nolt 384)
[So here we are using the specific predicate ‘P’ for ‘is a pole’. But we can also make a predicate variable using the second-order structure.]
More generally, using the predicate variable ‘X’, we can say that exactly two things have property X like this:
∃x∃y(~x = y & ∀z(Xz ↔ (z = x ∨ z = y)))
This expression is in effect a one-place predicate whose instances are properties rather than individuals.
(Nolt 384)
So as we can see, the above formulation says that exactly two individuals have whatever property the variable predicate stands for. If the property is had by any number of individuals other than two, the formula is false. [So already with the X predicate we are using second-order logic. Next Nolt will predicate the variable predicate X itself, and then we will be using third-order logic.] Nolt now uses an extra-large typeface to indicate the predicate for properties, and this predicate means, ‘is a property exemplified by two individuals’. [Let me place a copy of the text here as I cannot reliably replicate the typeface size differences.]
[image 14.1.b: Nolt’s formalization applying ‘is a property exemplified by two individuals’ to the property of being a pole, with the predicate of properties shown in extra-large typeface]
(Nolt 384)
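[Frege’s idea can also be rendered in the hedged Lean 4 sketch, with names of my own choosing: the numerical quantifier ‘exactly two things have property X’ becomes a definition whose argument is a property, which makes it literally a predicate of properties – Frege’s twoness.]

-- 'Exactly two things have property X', defined as a predicate on properties:
-- its type is (α → Prop) → Prop, so it applies to properties, not to individuals.
def Twoness {α : Type} (X : α → Prop) : Prop :=
  ∃ x y : α, x ≠ y ∧ ∀ z : α, X z ↔ (z = x ∨ z = y)

-- If a and b are distinct and the poles are exactly a and b, then 'is a pole' has twoness.
example {α : Type} (Pole : α → Prop) (a b : α)
    (hab : a ≠ b) (h : ∀ z, Pole z ↔ (z = a ∨ z = b)) :
    Twoness Pole := by
  unfold Twoness
  exact ⟨a, b, hab, h⟩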

As we can see, higher-order logic is particularly useful for constructing arithmetic, because it allows us to quantify over numbers and then to introduce higher-order predicates to further qualify them, such as ‘is prime’, ‘is greater than’, and so on (Nolt 384). This even led Whitehead and Russell to claim that
mathematics itself is nothing more than logic. More precisely, they argued that all mathematical ideas can be defined in terms of purely logical ideas and that all mathematical truths are logical truths. This thesis, known as logicism, has, however, met with serious technical difficulties and is now in disrepute.
(Nolt 384-385)

Another idea that second-order logic allows us to express is identity. By using second-order quantifiers and the biconditional, we can express what is called “Leibniz’s law.” Nolt does not use the biconditional to define identity; rather, he states the law as a principle connecting the primitive identity operator with the sharing of properties. Leibniz’s law
is the principle that objects are identical to one another if and only if they have exactly the same properties. In formal terms:
a = b ↔ ∀X(Xa ↔ Xb)
(Nolt 385)
[This seems to be saying that one term is identical with another if and only if all properties that hold for the one hold for the other.] [Now, since a biconditional is decomposable into two conditionals (for a similar idea, see Agler section 5.3.10 where he proves P→Q,Q→P ⊢ P↔Q)] “Leibniz’s law itself is sometimes further analyzed into two subsidiary principles” (Nolt 385). One is the
identity of indiscernibles:
∀X(Xa ↔ Xb) → a = b
(Nolt 385)
[Here the idea is that if two things are indiscernible with regard to their properties, then they are identical.] The other is the
indiscernibility of identicals:
a = b → ∀X(Xa ↔ Xb)
(Nolt 385)
[Here the idea is that if two things are identical, then they share the same properties.]
The first of these formulas says that objects that have exactly the same properties are identical, and the second says that identical objects have exactly the same properties. Leibniz's law is equivalent to their conjunction.
(385)
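[Both halves of Leibniz’s law can be checked in the same Lean 4 sketch. The indiscernibility of identicals amounts to rewriting with the identity; the identity of indiscernibles follows by instantiating the property variable X with the property ‘is identical to a’. This is only my illustration, not Nolt’s text.]

-- Indiscernibility of identicals: identical objects share all properties.
example {α : Type} (a b : α) (h : a = b) : ∀ X : α → Prop, X a ↔ X b :=
  fun X => ⟨fun ha => h ▸ ha, fun hb => h.symm ▸ hb⟩

-- Identity of indiscernibles: instantiate X with the property 'is identical to a'.
example {α : Type} (a b : α) (h : ∀ X : α → Prop, X a ↔ X b) : a = b :=
  (h (fun x => a = x)).mp rfl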

[So we have just seen how higher-order quantifiers can range over properties. This seems to be for one-place predicates.] Higher-order quantifiers may also range over relations. [Here we would be dealing with two-place or greater predicates.] So suppose we have this argument:
Al loves Beth.
∴ Al has some relation to Beth.
This argument may be formulated as
‘Lab ⊢ ∃ZZab’
(Nolt 385)
[Here the idea seems to be that because Al relates to Beth in some specific way, then we can infer that Al relates to Beth in some (unspecified) way. The ‘∃ZZab’ perhaps could be written ‘∃Z(Zab)’, so that the two Z’s are not confused.]
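[In the Lean 4 sketch the relational case works just like the property case: a binary relation is a term of type α → α → Prop, and the specific relation ‘loves’ witnesses the existential. The names are again my own.]

-- From 'Al loves Beth' infer that Al stands in some relation to Beth.
example {α : Type} (Loves : α → α → Prop) (al beth : α)
    (h : Loves al beth) : ∃ Z : α → α → Prop, Z al beth :=
  ⟨Loves, h⟩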

Nolt then shows how we can use higher-order logic to express analogies. He notes that they often take the form:
a stands to b as c stands to d
For instance:
Washington D.C. is to the USA as Moscow is to Russia,
the analogy here being the relationship between a country and its capital city
(Nolt 386)
An analogy between four terms can be written as:
∃Z(Zab & Zcd)
This says that there is some respect in which a stands to b as c stands to d.
(Nolt 386)
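[The analogy schema gets the same treatment in the Lean 4 sketch: any particular relation that holds both between a and b and between c and d (for instance, ‘is the capital of’) witnesses ∃Z(Zab & Zcd). The names are my own.]

-- If some one relation R holds of (a, b) and of (c, d), then there is a respect
-- in which a stands to b as c stands to d.
example {α : Type} (R : α → α → Prop) (a b c d : α)
    (h1 : R a b) (h2 : R c d) : ∃ Z : α → α → Prop, Z a b ∧ Z c d :=
  ⟨R, h1, h2⟩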

Nolt then explains that in logic there are a number of important generalizations, like ‘every asymmetric relation is irreflexive’, that can be expressed using higher-order logic. [For this idea, first recall from Suppes Intro section 10.3 the notion of asymmetry:
A relation R is asymmetric in the set A if, for every x and y in A, whenever xRy, then it is not the case yRx. In symbols:
R asymmetric in A ↔ (x)(y)[x ∈ A & y ∈ A & xRy → –(yRx)].
The relation of being a mother is asymmetric, for obvious biological reasons.
(Suppes 214)
And the notion of irreflexivity:
A relation R is irreflexive in the set A if, for every x in A it is not the case that xRx. In symbols:
R irreflexive in A ↔ (x)(x ∈ A → –(xRx)).
The relation of being a mother is irreflexive in the set of people, since no one is his own mother. The relation < is irreflexive in the set of real numbers, since no number is less than itself.
(Suppes 213)


] Nolt explains that
An asymmetric relation is one such that if it holds between x and y it does not hold between y and x. An irreflexive relation is one that does not hold between any object and itself.
(Nolt 386)
Nolt then makes this formalization:
‘All asymmetric relations are irreflexive’
∀Z(∀xy(Zxy → ~Zyx) → ∀x~Zxx)
(Nolt 386)

[Here the idea seems to be the following. We said that an asymmetric relation is one where, whenever the relation holds between two terms, its inverse does not. The antecedent of the main conditional thus says that the relation Z is asymmetric. Irreflexive, we said, means that the relation cannot hold between a term and itself. So the consequent of the main conditional says that Z is irreflexive, and the initial ‘∀Z’ generalizes this over all relations.]
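[The generalization itself can be verified in the Lean 4 sketch: given any relation Z that is asymmetric, applying the asymmetry hypothesis to the pair (x, x) immediately rules out Zxx.]

-- All asymmetric relations are irreflexive: instantiate both bound variables with x.
example {α : Type} :
    ∀ Z : α → α → Prop, (∀ x y, Z x y → ¬ Z y x) → ∀ x, ¬ Z x x :=
  fun Z hAsym x hxx => hAsym x x hxx hxx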
 
For the remainder of the chapter, Nolt will stick primarily to second-order logic, which will be sufficient for discussing the important ideas in type theory (Nolt 386).
 
[Recall from section 11.2.1 that a zero-place predicate is a sentence letter and is thus like a proposition in propositional logic.] Nolt says that some second-order logics allow us to quantify over whole propositions, when they are understood as zero-place predicates (sentence letters). So suppose we have the valid formula ‘P ∨ ~P’. From this we can infer ‘∀X(X ∨ ~X)’, or ‘every proposition is such that either it or its negation holds’ (Nolt 386). But Nolt then adds, “But the interpretation of such formulas is problematic, unless we regard propositions as truth values – in which case it is trivial. So we will not discuss this sort of quantification here” (Nolt 386).
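[If propositions are admitted as a domain of quantification, the Lean 4 sketch can state Nolt’s example too, since Prop is itself a type that can be quantified over; the proof is just classical excluded middle. Nolt sets this sort of quantification aside, so this is only an aside of mine.]

-- Quantifying over propositions: every proposition or its negation holds (classically).
example : ∀ X : Prop, X ∨ ¬ X :=
  fun X => Classical.em X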
 
Nolt’s final point is that “The syntax of second-order logic is like that of first-order predicate logic (including the identity predicate),” but there are two exceptions. The first is that “We now reserve the capital letters ‘U’ through ‘Z’, which before were predicates, to be used with or without subscripts as predicate variables” (386). And the second is that we need to add a clause to our formation rules. That clause is “If φ is a formula containing a predicate ψ, then any expression of the form ∀ΔφΔ/ψ or ∃ΔφΔ/ψ is a formula, where φΔ/ψ is the result of replacing one or more occurrences of ψ in φ by some predicate variable Δ not already in φ” (Nolt 386). [Here the idea seems to be that we are stating a formation rule for well-formed formulas now that we have predicate variables. A quantified second-order expression counts as a formula if it can be obtained from a formula containing a specific predicate by replacing one or more occurrences of that predicate with a predicate variable not already occurring in the formula and prefixing the corresponding quantifier. For example, from ‘Fa & Fb’, which contains the predicate ‘F’, we may form ‘∃X(Xa & Xb)’ by replacing both occurrences of ‘F’ with ‘X’ and prefixing ‘∃X’.]
 
 

From:

Nolt, John. Logics. Belmont, CA: Wadsworth, 1997.

