## 15 Jan 2018

### Priest (1.2) “Multiple Denotation, Ambiguity, and the Strange Case of the Missing Amoeba”, ‘1.2. LP Semantics’, summary


[Central Entry Directory]

[Logic and Semantics, entry directory]

[Graham Priest, entry directory]

[Priest, “Multiple Denotation, Ambiguity, and the Strange Case of the Missing Amoeba”, entry directory]

[The following is meant to be summary, but is in large part my best guesswork of the text’s meaning, made on the basis of other texts by Priest and using examples from Agler. So please consult the original text rather than trust my notes below. Also, proofreading is incomplete, so you will find typos and other mistakes.]

Summary of

Graham Priest

“Multiple Denotation, Ambiguity, and the Strange Case of the Missing Amoeba”

1

Formal Considerations

1.2

LP Semantics

Brief summary:

We formulate a paraconsistent LP semantics, meaning that we can have interpretations where a formula is evaluated as both true and false, but not as neither. We do this using a truth-evaluating relation R rather than a truth-evaluating function v. We assign the base formulas true and/or false by writing R(α, 1) and/or R(α, 0). Formulas with universally quantified variables are related to 1 if all substitutions of members of the domain (or of constants assigned to those members) for the variables yield the value 1, and they are related to 0 if at least one such substitution yields 0. Formulas with existentially quantified variables are related to 1 if at least one substitution yields 1, and they are related to 0 if all substitutions yield 0. Formulas built up by means of connectives are evaluated according to classical rules, but given that the base formulas can relate to both values, the more complex formulas can take both values too. Validity “is defined in terms of truth preservation in all interpretations and evaluations of the variables.” An interpretation of a predicate is classical if the intersection of its extension and anti-extension is the empty set. LP is a proper sub-logic of classical logic, and the logical truths of LP are exactly those of classical logic. Identity in this LP semantics is understood as the set of couples (from the domain) whose two terms are [taken to be] the same. We cannot, however, exclude any non-identical couplings from the identity predicate/relation’s anti-extension [because in our LP semantics there can be no items that are neither identical nor non-identical to themselves or to another item. However, (as far as I can tell, at least, and I still need to have this confirmed) we can have a pairing of something and itself be in both the extension and anti-extension of the identity predicate/relation, meaning that thing is both identical and not identical to itself.] Also, substitutivity of identicals holds in this semantics.

Contents

1.2

LP Semantics

1.2.1

[Truth Evaluation and Validity in an LP Propositional and First-Order Semantics with a Relation-Based Structure for Truth-Evaluations (Like First Degree Entailment, Only without the Option of No Valuation.)]

1.2.2

[Familiar Formulations for Evaluating Complex Formulas. LP. Classical Predicates. LP as a Proper Sub-Logic of Classical.]

1.2.3

[Identity and Substitutivity in LP]

Summary

1.2

LP Semantics

1.2.1

[Truth Evaluation and Validity in an LP Propositional and First-Order Semantics with a Relation-Based Structure for Truth-Evaluations (Like First Degree Entailment, Only without the Option of No Valuation.)]

[An LP semantics can be formulated by evaluating with relations instead of functions. This evaluating relation R allows for a formula to have a truth-relation to 1, to have a truth-relation to 0, or to have both such relations. But it is important that we stipulate that every formula be related to at least one value (for otherwise it would not be an LP sort of system but one more like First Degree Entailment). For propositions built up using connectives, we relate them to particular truth-values using a set of rules that allows us to handle instances where there is more than one truth-value for a formula. The criteria for the evaluations are like what we would expect for a classical sort of evaluation, but in some cases they may yield two truth relations, depending on what the calculations produce. In a first-order language, we have the function d, which assigns to each constant a member of the domain and to each predicate its extension and anti-extension. We then say that an atomic formula is related to 1 if the denotation of its terms is in the extension and to 0 if it is in the anti-extension. Formulas with universally quantified variables are related to 1 if all substitutions of members of the domain (or of constants assigned to those members) for the variables yield the value 1, and they are related to 0 if at least one such substitution yields 0. Formulas with existentially quantified variables are related to 1 if at least one substitution yields 1, and they are related to 0 if all substitutions yield 0. Validity “is defined in terms of truth preservation in all interpretations and evaluations of the variables.”]

We now will consider the semantics of LP (see section 7.4 of Priest’s Introduction to Non-Classical Logic). According to a certain view, paradoxical sentences are thought to have more than one truth value. In order to accommodate these cases, we can keep our classical semantics, except now instead of evaluating truth values using a function, we do so instead with a relation. [See Introduction to Non-Classical Logic section 8.2 on the semantics of First Degree Entailment, which seems similar to what we do in the following, except here we cannot have the situation where a formula relates to neither 1 nor 0.] So for propositional logic, we first define the relation of each atomic proposition to at least one truth value [thereby allowing for both truth values, but not allowing for neither truth value]. Then for formulations built up using connectives, we combine “the truth values of its components in all possible ways” (362). [For how this might work, let us draw from section 8.2.7 of Priest’s Introduction to Non-Classical Logic. I will modify the notation to be more like that of the current text. Let us begin with the simplest case to illustrate the notion of “combining the truth values of its components in all possible ways”. In this first example, there will only be one component and thus not yet any combinations to make, but still there will be the potential for multiple assignments. So suppose we only have one proposition, p. And suppose its value is defined as both 1 and 0. That means we will have:

R(p, 1)

and

R(p, 0)

And suppose we want to determine the value of ¬p. What are the possible ways to make that evaluation? First let us look at our rule for negation, given in the simplified form in section 1.2.2 below:

Rs(¬α, 1) iff Rs(α, 0)

Rs(¬α, 0) iff Rs(α, 1)

So, according to the evaluation of p that is R(p, 1), ¬p is 0:

R(¬p, 0)

But that is only one way to fulfill the criteria. For p, we also have R(p, 0), so in accordance with that evaluation, ¬p is 1.

R(¬p, 1)

So ¬p is both true and false:

R(¬p, 0)

and

R(¬p, 1)
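This little procedure can be sketched in code. The following Python fragment is my own illustration (not Priest’s notation): the relation R is modeled as a mapping from each atom to the set of values it relates to, which in LP is never empty.

```python
# Minimal sketch: represent the truth-evaluating relation R by mapping each
# atomic formula to the SET of values it relates to: {1}, {0}, or {1, 0}.
# (In LP this set is never empty; my own encoding, not Priest's notation.)

def neg_values(vals):
    """All z such that R(not-p, z): z is the negation of some x with R(p, x)."""
    return {1 - x for x in vals}

R = {"p": {1, 0}}            # p is related to both 1 and 0
print(neg_values(R["p"]))    # so not-p is related to both values as well
```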

Now let us look at an example from Priest’s Introduction to Non-Classical Logic section 8.2.7 where we need to deal with “combining the truth values of its components in all possible ways”, in other words, where we have more than one component and where we have more than one valuation for a component. Here first is a quotation:

As an example of how these conditions work, consider the formula ¬p ∧ (q ∨ r). Suppose that pρ1, pρ0, qρ1 and rρ0, and that ρ relates no parameter to anything else. Since p is true, ¬p is false; and since p is false, ¬p is true. Thus ¬p is both true and false. Since q is true, q ∨ r is true; and since q is not false, q ∨ r is not false. Thus, q ∨ r is simply true. But then, ¬p ∧ (q ∨ r) is true, since both conjuncts are true; and false, since the first conjunct is false. That is, ¬p ∧ (q ∨ r)ρ1 and ¬p ∧ (q ∨ r)ρ0.

(Priest, Introduction to Non-Classical Logic, p.143)

Here we have the following relations, which I will reformulate for our current conventions:

R(p, 1)

R(p, 0)

R(q, 1)

R(r, 0)

And we want to evaluate:

R(¬p ∧ (q ∨ r), ???)

Since p has two values, and the others have one, there are two combinations of values for the whole formula. Put simplistically (with the values standing in for the value assignments of the base formulas), they are:

Combination 1:

¬(1) ∧ (1 ∨ 0)

0 ∧ 1

0

Combination 2:

¬(0) ∧ (1 ∨ 0)

1 ∧ 1

1

So

R(¬p ∧ (q ∨ r), 0)

and

R(¬p ∧ (q ∨ r), 1)
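These two combinations can also be enumerated mechanically. The following Python sketch is my own illustration (with ¬ encoded as 1 − x, ∧ as min, and ∨ as max); it combines the truth values of the components in all possible ways:

```python
from itertools import product

# Combine the component values of the formula "not-p and (q or r)" in all
# possible ways, with R(p, 1), R(p, 0), R(q, 1), R(r, 0) as above.
R = {"p": {1, 0}, "q": {1}, "r": {0}}

values = set()
for p, q, r in product(R["p"], R["q"], R["r"]):
    values.add(min(1 - p, max(q, r)))  # not as 1 - x, and as min, or as max

print(sorted(values))  # [0, 1]: the whole formula is both true and false
```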

But the evaluations as we will see will be made using a more formal procedure.] Priest then defines the way truth-values are evaluated in a more precise way:

R(¬α, z) iff ∃x(R(α, x) & z = ⊖x)

R(α ∧ β, z) iff ∃xy(R(α, x) & R(β, y) & z = x ⊗ y)

R(α ∨ β, z) iff ∃xy(R(α, x) & R(β, y) & z = x ⊕ y)

(362)
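Rendered as code, the three clauses might look as follows. This is a sketch under my own encoding assumptions (⊖x as 1 − x, x ⊗ y as min(x, y), x ⊕ y as max(x, y)); each clause collects every value z obtainable from the values of the components:

```python
# The three clauses as operations on sets of truth values (my encoding:
# negation 1 - x, conjunction min(x, y), disjunction max(x, y)).

def R_neg(vals_a):
    return {1 - x for x in vals_a}                      # z is the negation of x

def R_and(vals_a, vals_b):
    return {min(x, y) for x in vals_a for y in vals_b}  # z = x conj y

def R_or(vals_a, vals_b):
    return {max(x, y) for x in vals_a for y in vals_b}  # z = x disj y

# With R(p, 1), R(p, 0), R(q, 1), R(r, 0), the earlier example formula
# again comes out related to both values:
print(R_and(R_neg({1, 0}), R_or({1}, {0})))  # {0, 1}
```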

[I may not understand these well, so let us work through them.

R(¬α, z) iff ∃x(R(α, x) & z = ⊖x)

Let us begin with:

R(¬α, z)

We are giving the evaluation for the negation of a proposition. This means that z is a truth value, but it is not determined yet which one (the formula will work for both.) Now this part:

∃x(R(α, x) & z = ⊖x)

I want to first note that in order for me to get this to work, I need to read it not as “R(α, x) & z” equaling “⊖x” but rather as a conjunction of “R(α, x)” and “z = ⊖x”. Probably that is evident from the syntax, but I am unaware of those conventions. We are assuming that the variables are truth values, so the x of ∃x I think would be saying that ‘there is a truth value ...’ (possibly a different one, because it has a different letter symbol). And with regard to this other truth value, it is related to the unnegated form of the proposition; and also, the first value we mentioned (the one assigned to the negated form of the proposition) is what we obtain by applying the negation function to this second value. So suppose our z here is understood to be 0, and our x to be 1. That means a formula with negation, like ¬α, relates to 0 only if there is another value, 1, which relates to the unnegated form α and which, when the negation function operates on it, yields the 0 (see section 1.1.1 for how these connective functions work). So far, we have only defined what makes a negated form false. But the rule for what makes it true I think is built in, because the z value can instead be thought of as 1, and the x as 0. So it gives the rule for both cases. The next one was for conjunction:

R(α ∧ β, z) iff ∃xy(R(α, x) & R(β, y) & z = x ⊗ y)

So the conjunction gets a certain value only under the following conditions. Here we will relate a certain truth-value to the whole conjunction. We will do so on the following basis. We suppose that the first conjunct is related to a value and the second one is also related to a value (without specifying whether those values are the same or not), and we also suppose in conjunction with this that the conjunction operation on the two assigned values will yield the value that we assign to the whole conjunction. So suppose our z value is 1, and thus we are giving the rule for the circumstances under which the conjunction would be related to 1. But let us now jump to z = x ⊗ y. If z is 1, then x ⊗ y needs to equal 1, and the only way that can happen is if both x and y are 1. (So even though we have three variables, in this case they will all need to take the same value in the end.) And the formula says before that, in this case, the first conjunct is related to the first truth-value, which is here 1, and the second to the second value, which is also 1. Now for the criteria for the conjunction to relate to 0. In that case, x ⊗ y would need to yield 0. This can happen if either term is 0 or if both are. And again, one of those values relates to the first conjunct and the other to the second. So this leaves the following possibilities: that z is 0, x is 1, and y is 0; that z is 0, x is 0, and y is 1; and that z is 0, x is 0, and y is 0. What does not work under this assumption is that z is 0, x is 1, and y is 1, because in that case the conjunction function would not yield a value that equals z, which is a stipulation here. And finally we have:

R(α ∨ β, z) iff ∃xy(R(α, x) & R(β, y) & z = x ⊕ y)

Here if z is 1, then either the first disjunct is 1, or the second is, or both are. And z is 0 if both disjuncts are 0 (to give a shorthand version of the sort of explanation above).] “Validity is defined in terms of truth preservation:”

Σ ⊨ α iff for all v, iff R(β, 1) for all β ∈ Σ then R(α, 1)

[Let us look at how it is defined for First Degree Entailment in his Introduction to Non-Classical Logic section 8.2.8:

Semantic consequence is defined, in the usual way, in terms of truth preservation, thus:

Σ ⊨ A iff for every interpretation, ρ, if Bρ1 for all B ∈ Σ then Aρ1

and:

⊨ A iff φ ⊨ A, i.e., for all ρ, Aρ1

(Priest, Introduction to Non-Classical Logic, 144)

Now, were we to make these parallel, I would have expected:

Σ ⊨ α iff for all R, iff R(β, 1) for all β ∈ Σ then R(α, 1)

(with boldface for emphasis)

One possibility is that they are equivalent; in other words, since the R serves for truth valuation, we might as well say v, which practically does the exact same thing and is what we are more used to anyway. The other possibility is that v means a variable in formula α. So perhaps

Σ ⊨ α iff for all v, iff R(β, 1) for all β ∈ Σ then R(α, 1)

means something like, ‘for every substitution of the variables in all of the formulas ...’. If we stick strictly with the definition of v as a valuating function given previously (rather than distinguishing a function v when syntactically it works as a function from a variable v when syntactically it works as a variable), then I suppose we have something like the first option I considered here, where it is understood as equivalent to the R of this text or to the ρ of the other text. I would also note something. In the version found on Priest’s website, there seems to be a hand marking through the v. But in the version at JSTOR (and presumably in the print version itself, though I have not checked) there is no line there; I assume it is a stray marking, but it came to my attention while working on this part. I also note a discrepancy between this formulation and the one in Introduction to Non-Classical Logic: here there are two biconditionals (“iff”), but in the other text there is just one.

Σ ⊨ α iff for all v, iff R(β, 1) for all β ∈ Σ then R(α, 1)

(362)

Σ ⊨ A iff for every interpretation, ρ, if Bρ1 for all B ∈ Σ then Aρ1

(Priest, Introduction to Non-Classical Logic, 144)

Regarding the formulation with a biconditional then a conditional, it reminds me a little of what Frederic Schuller says in this course video around 28 minutes or so. So suppose for some reason there are no relation-evaluation configurations at all that make all the premises true in a classical situation, as with the inference

p ∧ ¬p ⊨ q

then in the formula

Σ ⊨ A iff for every interpretation, ρ, if Bρ1 for all B ∈ Σ then Aρ1

we assign the part reading

for every interpretation, ρ, if Bρ1 for all B ∈ Σ

as 0. For, Bρ0 for all B ∈ Σ. When the antecedent of a conditional is 0, then the whole conditional would be 1. So the part reading:

for every interpretation, ρ, if Bρ1 for all B ∈ Σ then Aρ1

is 1. That side is biconditionally conjoined to

Σ ⊨ A

Now, since one side of the biconditional is 1, that makes the other side 1, and so the inference would be valid. (Yet in fact I am not at all sure how this works.) But in our current text, as we noted, we are using the biconditional “iff” twice.

Σ ⊨ α iff for all v, iff R(β, 1) for all β ∈ Σ then R(α, 1)

I do not know how to translate that symbolically, because I do not know how the structure “if and only if Q, then P” works. Is it “P if and only if Q”? Until I learn the proper understanding of that structure, I will not really know how to grasp the way it works.] Let us return now to the text. We will now formulate truth evaluation for our first-order language:

For the first-order case, we now take an interpretation to be a pair, ⟨D, d⟩, where D is a non-empty domain of objects, d assigns each constant a member of the domain, and each n-place predicate, P, a pair ⟨Ep, Ap⟩, (the ex- | tension and anti-extension of P, respectively), such that Ep ∪ Ap = Dn. The relation between formulas and truth values is now relative to an evaluation of the free variables, s. In the atomic case, this is defined by the clauses:

Rs(Pt1 ... tn, 1) iff ⟨f(t1), ..., f(tn)⟩ ∈ Ep

Rs(Pt1 ... tn, 0) iff ⟨f(t1), ..., f(tn)⟩ ∈ Ap

(362-363)

[So like in section 1.1.2, an interpretation involves a domain of objects D and a function d that assigns each constant to a member in the domain. But unlike before, d works a bit differently for predicates. We first should note the idea of a predicate’s extension and its anti-extension. The following comes from Priest’s One section P.5:

In first-order logic, a predicate has an extension—the set of things of which it is true—and an anti-extension—the set of things of which it is false. In classical logic, the anti-extension need not be mentioned explicitly, since it is simply the complement of the extension. However, in a paraconsistent logic, the extension of a predicate does not determine the anti-extension. It therefore needs a separate specification.

(Priest, One, p.xx, boldface mine)

We will also need the notion of the union of sets, and let us mention the intersection as well, because we will need it later. This comes from the “Brief summary” of the post on Suppes’ Introduction to Logic section 9.5.

Certain operations can be performed on sets. If we find all the members shared in common between two sets, we are finding their intersection (∩):

(x)(x ∈ A ∩ B ↔ x ∈ A & x ∈ B)

When two intersecting sets share no members in common, that is, when they are mutually exclusive sets, their intersection is the empty set. The set containing all the members in total from two sets is their union (∪):

(x)(x ∈ A ∪ B ↔ x ∈ A ∨ x ∈ B)

(Suppes, Introduction to Logic, pp.184-185)

So we are going to take apart the line:

d assigns [...] each n-place predicate, P, a pair ⟨Ep, Ap⟩, (the ex- | tension and anti-extension of P, respectively), such that Ep ∪ Ap = Dn.

(362-363)

Let us return to our example from David Agler’s Symbolic Logic: Syntax, Semantics, and Proof section 6.4.2, as modified for section 1.1 of our current text, but now I will further modify it for section 1.2 (our current section):

D = {Alfred, Bill, Corinne}

C = {a, b, c}

d(a) = Alfred

d(b) = Bill

d(c) = Corinne

D1 = {⟨Alfred⟩, ⟨Bill⟩, ⟨Corinne⟩}

D2 = {⟨Alfred, Alfred⟩, ⟨Alfred, Bill⟩, ⟨Alfred, Corinne⟩, ⟨Bill, Alfred⟩, ⟨Bill, Bill⟩, ⟨Bill, Corinne⟩, ⟨Corinne, Alfred⟩, ⟨Corinne, Bill⟩, ⟨Corinne, Corinne⟩}

[Sx: x is short]

d(Sx) = ⟨ { ⟨Alfred⟩, ⟨Bill⟩}, {⟨Corinne⟩} ⟩

[in other words, the extension of S is Alfred and Bill, and the anti-extension is Corinne]

[Lxy: x loves y]

d(Lxy) = ⟨ { ⟨Bill, Corinne⟩, ⟨Corinne, Alfred⟩}, { ⟨Alfred, Alfred⟩, ⟨Alfred, Bill⟩, ⟨Alfred, Corinne⟩, ⟨Bill, Alfred⟩, ⟨Bill, Bill⟩, ⟨Corinne, Bill⟩, ⟨Corinne, Corinne⟩ } ⟩

[In other words, the extension of L is {⟨Bill, Corinne⟩, ⟨Corinne, Alfred⟩} and the anti-extension is all the rest.]

Now what about the part reading

Ep ∪ Ap = Dn

? For predicate S, it is D1, and for predicate L, it is D2. We said that the union of two sets is the set containing all the members in total from both. So let us fill out the values:

ES ∪ AS = D1

{⟨Alfred⟩, ⟨Bill⟩} ∪ {⟨Corinne⟩} = {⟨Alfred⟩, ⟨Bill⟩, ⟨Corinne⟩}
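The exhaustion condition Ep ∪ Ap = Dn can also be checked directly in code for the running Agler-style example (the names below come from that example, not from Priest’s text):

```python
from itertools import product

# Check that extension and anti-extension together exhaust D^n for the
# two predicates of the running example.
D = {"Alfred", "Bill", "Corinne"}
D1 = {(x,) for x in D}
D2 = set(product(D, repeat=2))

E_S, A_S = {("Alfred",), ("Bill",)}, {("Corinne",)}   # S: "is short"
E_L = {("Bill", "Corinne"), ("Corinne", "Alfred")}    # L: "loves"
A_L = D2 - E_L                                        # "all the rest"

print(E_S | A_S == D1)  # True
print(E_L | A_L == D2)  # True
```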

The next part:

The relation between formulas and truth values is now relative to an evaluation of the free variables, s. In the atomic case, this is defined by the clauses:

Rs(Pt1 ... tn, 1) iff ⟨f(t1), ..., f(tn)⟩ ∈ Ep

Rs(Pt1 ... tn, 0) iff ⟨f(t1), ..., f(tn)⟩ ∈ Ap

(363)

Let us consider a sort of example based somewhat on the ideas in section 11.2 of Priest’s In Contradiction.

As I write, my pen is touching the paper. As I come to the end of a word I lift it off. At one time it is on; at another it is off (that is, not on). Since the motion is continuous, there must be an instant at which the pen leaves the paper. At that instant, is it on the paper or off?

(Priest, In Contradiction, p.160)

So suppose our domain has one object, the pen, or “e”. And suppose we have one predicate, O, for “is on the paper” or the like. Let us set this up to fill out the formulations.

D = {pen}

d(e) = pen

[O: “is on the paper”]

d(O) = {⟨pen⟩}

(I am not sure here if “at the instant of change” belongs to the pen, and thus something like:

D = {pen at t1}

or if it belongs to the predicate and thus something like:

[O: “is on the paper at t1”]

or if it is somehow exterior to these two terms and added by other means in other formulations. I will put that matter aside for the time being.) First take a classical situation where we think that at the instant of change, it is still on the paper. Then O’s extension would be {e} and its anti-extension would I think be an empty set, because there are no objects in the domain that do not hold for the predicate. So let us evaluate the truth of the formula saying that the pen is on the paper.

D = {pen}

d(e) = pen

[O: “is on the paper”]

d(O) = ⟨{⟨pen⟩}, ∅⟩

EO = {⟨pen⟩}

AO = ∅

Rs(Pt1 ... tn, 1) iff ⟨f(t1), ..., f(tn)⟩ ∈ Ep

Rs(Pt1 ... tn, 0) iff ⟨f(t1), ..., f(tn)⟩ ∈ Ap

Rs(Oe, 1) iff ⟨d(e)⟩ ∈ EO

Rs(Oe, 0) iff ⟨d(e)⟩ ∈ AO

Rs(Oe, 1) iff ⟨pen⟩ ∈ {⟨pen⟩}

Rs(Oe, 0) iff ⟨pen⟩ ∈ ∅

Rs(Oe, 1)

But now suppose instead we want to take the paraconsistent view that it is both on and off the paper at the instant of change. The extension of the predicate would still include the pen, because “is on the paper” still holds for it. But I would think that the anti-extension would include the pen too. For, since it is also off the paper, in light of that fact, we would also need to say that the predicate as well does not hold for the pen. And so the pen would be in both the extension and anti-extension of predicate O, even though in a classical system, this is not permissible. I would think then the evaluation would go as follows:

D = {pen}

d(e) = pen

[O: “is on the paper”]

d(O) = ⟨{⟨pen⟩}, {⟨pen⟩}⟩

EO = {⟨pen⟩}

AO = {⟨pen⟩}

Rs(Pt1 ... tn, 1) iff ⟨f(t1), ..., f(tn)⟩ ∈ Ep

Rs(Pt1 ... tn, 0) iff ⟨f(t1), ..., f(tn)⟩ ∈ Ap

Rs(Oe, 1) iff ⟨d(e)⟩ ∈ EO

Rs(Oe, 0) iff ⟨d(e)⟩ ∈ AO

Rs(Oe, 1) iff ⟨pen⟩ ∈ {⟨pen⟩}

Rs(Oe, 0) iff ⟨pen⟩ ∈ {⟨pen⟩}

Rs(Oe, 1)

Rs(Oe, 0)
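The two readings of the pen example can be sketched as code; the atomic clauses just test membership of the denotation in the extension and in the anti-extension (my own illustration, not Priest’s text):

```python
# Atomic truth evaluation: Oe relates to 1 iff d(e) is in the extension
# of O, and to 0 iff d(e) is in the anti-extension (my encoding).

def atomic_values(denotation, extension, anti_extension):
    vals = set()
    if denotation in extension:
        vals.add(1)
    if denotation in anti_extension:
        vals.add(0)
    return vals

# Classical reading: at the instant of change the pen is simply on the paper.
print(atomic_values(("pen",), {("pen",)}, set()))       # {1}
# Paraconsistent reading: at that instant it is both on and off.
print(atomic_values(("pen",), {("pen",)}, {("pen",)}))  # {0, 1}
```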

Let us continue with the current Priest text:

Truth values (relative to s) are then extended to all sentences. The recursive clauses for the connectives are the same as in the propositional case (relativised to s).

(363)

So, I would assume that, for our above example:

Rs(Oe, 1)

Rs(¬Oe, 0)

The next part on quantifiers is complicated:

For the quantifiers, if F is a map from D into {0,1} such that Rs(v/a)(α, F(a)) for all a ∈ D, let us say that F tracks α. Then:

Rs(∀vα, z) iff ∃F(F tracks α & z = ⊗{F(a); a ∈ D})

Rs(∃vα, z) iff ∃F(F tracks α & z = ⊕{F(a); a ∈ D})

(363)

So let us go part-by-part.

F is a map from D into {0,1}

By being a “map,” I am assuming that F is a function or at least that it operates like one. So it will assign the members of D either a 1 or a 0. For the part reading:

Rs(v/a)(α, F(a))

let us first just suppose for the moment that we have:

R(α, F(a))

Here we have formula alpha (α) and constant/domain-item a. F(a) is a 1 or a 0. So the R relation in R(α, F(a)) assigns a 1 or a 0 to alpha, by means of function F, which assigns a 1 or a 0 to a. Let us look at the actual formulation now.

F is a map from D into {0,1} such that Rs(v/a)(α, F(a)) for all a ∈ D

This I think is saying that we are making the relational truth assignment relativized to s (that is, given the particular variable substitution that s happens to make in some certain case), where function s substitutes each member of the domain for the variable, and the value for each substitution is determined by how F assigns a 1 or a 0 for that given substitution. (This all seems to imply that the F function is defined such that it assigns a 1 when the formula’s predicate should be holding for that particular item and a 0 when the predicate should not be holding.) Now, insofar as F makes such assignments (that are in accordance with the intended interpretations of the predicates), then we say that F tracks alpha. Now with that terminology being defined, we then evaluate formulas with universal and existential quantification in the following way:

Rs(∀vα, z) iff ∃F(F tracks α & z = ⊗{F(a); a ∈ D})

Rs(∃vα, z) iff ∃F(F tracks α & z = ⊕{F(a); a ∈ D})

So again the formulations seem to be made general enough that they function to correctly assign 1 or 0, even though there is no mention of those particular values. This will be easier to unpack if we arbitrarily assign to z either 1 or 0. So let’s say:

Rs(∀vα, 1) iff ∃F(F tracks α & 1 = ⊗{F(a); a ∈ D})

Rs(∀vα, 0) iff ∃F(F tracks α & 0 = ⊗{F(a); a ∈ D})

So, supposing that the v here means a variable in formula alpha, these seem to be saying the following. We relate a formula with a universally quantified variable to 1 only if there is a function that assigns to each substitution for the variable either a 1 or 0 (in such a way that the truth-value assigned to the particular substitution corresponds to the truth or falsity of the formula after the substitution is made) and also if the generalized conjunction function yields a 1 when performed on all those truth-value assignments for each substitution. So let us take an Agler-like example that we set up in section 1.1.1 (and note for extension and anti-extension I am using domain items, but perhaps I should be using constant letters standing for those items):

D = {Alfred, Bill, Corinne}

C = {a, b, c}

d(a) = Alfred

d(b) = Bill

d(c) = Corinne

[Sx: x is short]

d(Sx) = {⟨Alfred⟩, ⟨Bill⟩}

ES = {⟨Alfred⟩, ⟨Bill⟩}

AS = {⟨Corinne⟩}

Rs(∀vS, ???)

Rs(∀vα, 1) iff ∃F(F tracks α & 1 = ⊗{F(a); a ∈ D})

Rs(∀vα, 0) iff ∃F(F tracks α & 0 = ⊗{F(a); a ∈ D})

Rs(Pt1 ... tn, 1) iff ⟨f(t1), ..., f(tn)⟩ ∈ Ep

Rs(Pt1 ... tn, 0) iff ⟨f(t1), ..., f(tn)⟩ ∈ Ap

F-tracking instance 1:

Rs1(Sa, 1) iff ⟨d(a)⟩ ∈ ES

Rs1(Sa, 0) iff ⟨d(a)⟩ ∈ AS

Rs1(Sa, 1) iff ⟨Alfred⟩ ∈ {⟨Alfred⟩, ⟨Bill⟩}

Rs1(Sa, 0) iff ⟨Alfred⟩ ∈ {⟨Corinne⟩}

Rs1(Sa, 1)

F-tracking instance 2:

Rs2(Sb, 1) iff ⟨d(b)⟩ ∈ ES

Rs2(Sb, 0) iff ⟨d(b)⟩ ∈ AS

Rs2(Sb, 1) iff ⟨Bill⟩ ∈ {⟨Alfred⟩, ⟨Bill⟩}

Rs2(Sb, 0) iff ⟨Bill⟩ ∈ {⟨Corinne⟩}

Rs2(Sb, 1)

F-tracking instance 3:

Rs3(Sc, 1) iff ⟨d(c)⟩ ∈ ES

Rs3(Sc, 0) iff ⟨d(c)⟩ ∈ AS

Rs3(Sc, 1) iff ⟨Corinne⟩ ∈ {⟨Alfred⟩, ⟨Bill⟩}

Rs3(Sc, 0) iff ⟨Corinne⟩ ∈ {⟨Corinne⟩}

Rs3(Sc, 0)

Rs(∀vS, 1) iff ∃F(F tracks α & 1 = ⊗{F(a); a ∈ D})

Rs(∀vS, 0) iff ∃F(F tracks α & 0 = ⊗{F(a); a ∈ D})

{F(a); a ∈ D}

{1, 1, 0}

⊗X = 0 iff 0 ∈ X

[and 1 otherwise]

⊗{1, 1, 0} = 0

Rs(∀vS, 1) iff ∃F(F tracks α & 1=0)

Rs(∀vS, 0) iff ∃F(F tracks α & 0 = 0)

Rs(∀vS, 0)
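The tracking-function machinery can be brute-forced for this small example. In the sketch below (my own encoding, not Priest’s text), a tracking function F picks, for each member of the domain, one value that the corresponding instance relates to; ⊗ of a set of values is then min (it is 0 iff 0 is a member) and ⊕ is max:

```python
from itertools import product

# Quantifier clauses via tracking functions for S ("is short") over
# D = {Alfred, Bill, Corinne}, with E_S = {Alfred, Bill}, A_S = {Corinne}.
D = ["Alfred", "Bill", "Corinne"]
E_S, A_S = {"Alfred", "Bill"}, {"Corinne"}

# The set of values each instance Sa relates to:
value_sets = [({1} if a in E_S else set()) | ({0} if a in A_S else set())
              for a in D]

# Each choice of one value per instance is a tracking function F.
forall_vals = {min(F) for F in product(*value_sets)}  # generalized conjunction
exists_vals = {max(F) for F in product(*value_sets)}  # generalized disjunction

print(forall_vals)  # {0}: not everyone is short
print(exists_vals)  # {1}: someone is short
```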

And similarly for the existential quantifier. Validity is defined in the following way:

Validity is defined in terms of truth preservation in all interpretations and evaluations of the variables, as usual.

(363)
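For the propositional fragment, truth preservation in all interpretations can be checked by brute force. The sketch below is my own encoding (formulas are atoms or nested tuples, nothing from Priest’s text); it enumerates every LP assignment of {1}, {0}, or {1, 0} to the atoms, and explosion comes out invalid, as a paraconsistent logic requires:

```python
from itertools import product

# Brute-force LP validity for the propositional fragment. Formulas are
# atoms ("p") or tuples like ("and", "p", ("not", "p")).

def values(f, R):
    if isinstance(f, str):
        return R[f]
    op, *args = f
    if op == "not":
        return {1 - x for x in values(args[0], R)}
    combine = min if op == "and" else max        # "and" / "or"
    return {combine(x, y)
            for x in values(args[0], R) for y in values(args[1], R)}

def lp_valid(premises, conclusion, atoms):
    # Every atom relates to {1}, {0}, or {1, 0}, never to neither.
    for assignment in product([{1}, {0}, {1, 0}], repeat=len(atoms)):
        R = dict(zip(atoms, assignment))
        if all(1 in values(p, R) for p in premises) and 1 not in values(conclusion, R):
            return False
    return True

# Explosion fails: let p be both true and false while q is just false.
print(lp_valid([("and", "p", ("not", "p"))], "q", ["p", "q"]))  # False
# Adjunction still holds: from p and q, infer p and q together.
print(lp_valid(["p", "q"], ("and", "p", "q"), ["p", "q"]))      # True
```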

(See section 1.1.2 for a discussion of how validity would work, but here only with the relational structures.) Here is the quotation in full]:

Over recent years, some logicians have suggested that certain sentences, notably, paradoxical ones, may have more than one truth value. Classical semantics are easily modified to accommodate this possibility. Instead of taking an evaluation to be a function, we take it to be a relation, R, between formulas and truth values, such that each formula relates to at least one truth value. In the propositional case, the relation may be taken as defined on atomic formulas in the first instance. It can then be extended to all formulas according to the following simple idea: the truth values of a compound are exactly those that can be obtained by combining the truth values of its components in all possible ways. More precisely:

R(¬α, z) iff ∃x(R(α, x) & z = ⊖x)

R(α ∧ β, z) iff ∃xy(R(α, x) & R(β, y) & z = x ⊗ y)

R(α ∨ β, z) iff ∃xy(R(α, x) & R(β, y) & z = x ⊕ y)

Validity is defined in terms of truth preservation:

Σ ⊨ α iff for all v, iff R(β, 1) for all β ∈ Σ then R(α, 1)

For the first-order case, we now take an interpretation to be a pair, ⟨D, d⟩, where D is a non-empty domain of objects, d assigns each constant a member of the domain, and each n-place predicate, P, a pair ⟨Ep, Ap⟩, (the ex- | tension and anti-extension of P, respectively), such that Ep ∪ Ap = Dn. The relation between formulas and truth values is now relative to an evaluation of the free variables, s. In the atomic case, this is defined by the clauses:

Rs(Pt1 ... tn, 1) iff ⟨f(t1), ..., f(tn)⟩ ∈ Ep

Rs(Pt1 ... tn, 0) iff ⟨f(t1), ..., f(tn)⟩ ∈ Ap

Truth values (relative to s) are then extended to all sentences. The recursive clauses for the connectives are the same as in the propositional case (relativised to s). For the quantifiers, if F is a map from D into {0,1} such that Rs(v/a)(α, F(a)) for all a ∈ D, let us say that F tracks α. Then:

Rs(∀vα, z) iff ∃F(F tracks α & z = ⊗{F(a); a ∈ D})

Rs(∃vα, z) iff ∃F(F tracks α & z = ⊕{F(a); a ∈ D})

Validity is defined in terms of truth preservation in all interpretations and evaluations of the variables, as usual.

(362-363)

[contents]

1.2.2

[Familiar Formulations for Evaluating Complex Formulas. LP. Classical Predicates. LP as a Proper Sub-Logic of Classical.]

[The rules for evaluating formulas that are made more complex by connectives or quantifiers are as you would expect for a classical situation, only now using a truth-evaluating relation. These semantics are a version of the paraconsistent logic LP. An interpretation of a predicate is classical if the intersection of its extension and anti-extension is an empty set. LP is a proper sub-logic of classical logic, and all the logical truths of LP are those of classical logic.]

Priest then provides a more familiar set of formulations for the truth conditions for the connectives and quantifiers, which can be directly derived from the above more general formulations.

Rs(¬α, 1) iff Rs(α, 0)

Rs(¬α, 0) iff Rs(α, 1)

Rs(α ∧ β, 1) iff Rs(α, 1) and Rs(β, 1)

Rs(α ∧ β, 0) iff Rs(α, 0) or Rs(β, 0)

Rs(α ∨ β, 1) iff Rs(α, 1) or Rs(β, 1)

Rs(α ∨ β, 0) iff Rs(α, 0) and Rs(β, 0)

Rs(∀vα, 1) iff for every a ∈ D, Rs(v/a)(α, 1)

Rs(∀vα, 0) iff for some a ∈ D, Rs(v/a)(α, 0)

Rs(∃vα, 1) iff for some a ∈ D, Rs(v/a)(α, 1)

Rs(∃vα, 0) iff for all a ∈ D, Rs(v/a)(α, 0)

(363)

Priest notes that what we have just outlined above provides one version to the semantics of LP, which is a paraconsistent logic (see section 7.3 of Introduction to Non-Classical Logic.) Priest then writes:

If the interpretation of a predicate, P, is such that EP ∩ AP = ∅, I will call it classical.

(364)

[First recall what we noted above about the intersection, from the brief summary to the post on section 9.5 of Suppes’ Introduction to Logic:

If we find all the members shared in common between two sets, we are finding their intersection (∩):

(x)(x ∈ A ∩ B ↔ x ∈ A & x ∈ B)

(based on Suppes, Introduction to Logic, p.184)

Suppes has us consider the following two sets: A the set of all men and B the set of all animals that weigh more than ten tons (184). He writes,

In this case we notice that A ∩ B is the empty set (despite the fact that A ≠ Λ, and B ≠ Λ, since some whales weigh more than ten tons). When A ∩ B = Λ, we say that A and B are mutually exclusive.
(Suppes, Introduction to Logic, 184)

Now also recall from section 1.2.1 above the example of the instant of change, where we first considered a classical view that would say the pen is either on the paper or off the paper, but never both or neither. For the notion that the pen is on the paper at the moment of change, we assigned the extension and anti-extension as:

EO = {⟨pen⟩}

AO = ∅

I would think that the intersection of these sets is the empty set, because they share no members in common, and thus these sets are mutually exclusive.

EO = {⟨pen⟩}

AO = ∅

If the interpretation of a predicate, P, is such that EP ∩ AP = ∅, I will call it classical.

EO ∩ AO = ∅

[Classical]
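The classicality test is, on this reading, just a check that two sets do not overlap. Here is a small Python sketch of the point (my own illustration, reusing the pen example; the names are mine):

```python
# An interpretation of the predicate O ('is on the paper') is classical
# when the extension and anti-extension do not overlap: E ∩ A = ∅.

def is_classical(extension, anti_extension):
    return extension & anti_extension == set()

# Classical view at the instant of change: the pen is on the paper,
# and nothing is in the anti-extension.
E_classical, A_classical = {("pen",)}, set()

# Paraconsistent view: the pen is both on and not on the paper.
E_glut, A_glut = {("pen",)}, {("pen",)}

print(is_classical(E_classical, A_classical))  # True: E ∩ A = ∅
print(is_classical(E_glut, A_glut))            # False: E ∩ A = {('pen',)}
```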

The intersection of the two sets in the paraconsistent interpretation (the pen is both on and not on the paper) would not be an empty set, I think, but would rather contain the one member they share, namely, ⟨pen⟩. Next he writes:

As is easy to see, if all the predicates of an interpretation are classical, then the interpretation is essentially a functional interpretation of classical logic. Hence, LP is a sub-logic of classical logic. It is, however, a proper sub-logic, since it is paraconsistent.

(364)

Unfortunately I do not fully grasp this part. I do not know what it means to be a sub-logic. Perhaps it means that every inference valid in LP is also valid in classical logic; it would then be a proper sub-logic because some classically valid inferences, such as explosion (α, ¬α ⊨ β), fail in LP. But I am not sure. His final point is:

I note, without proof, that the logical truths of LP are exactly those of classical logic.

(364)

He cites chapter 5 of In Contradiction. I have not read it yet, but a good place to look for such a proof might be section 5.5. Now finally here is a quotation of the whole paragraph:]

It is easy to check that the recursive truth conditions may be put in an equivalent, but slightly more familiar, form as follows:

Rs(¬α, 1) iff Rs(α, 0)

Rs(¬α, 0) iff Rs(α, 1)

Rs(α ∧ β, 1) iff Rs(α, 1) and Rs(β, 1)

Rs(α ∧ β, 0) iff Rs(α, 0) or Rs(β, 0)

Rs(α ∨ β, 1) iff Rs(α, 1) or Rs(β, 1)

Rs(α ∨ β, 0) iff Rs(α, 0) and Rs(β, 0)

Rs(∀vα, 1) iff for every a ∈ D, Rs(v/a)(α, 1)

Rs(∀vα, 0) iff for some a ∈ D, Rs(v/a)(α, 0)

Rs(∃vα, 1) iff for some a ∈ D, Rs(v/a)(α, 1)

Rs(∃vα, 0) iff for all a ∈ D, Rs(v/a)(α, 0)


The above semantics are one version of the semantics for the paraconsistent logic LP.1 If the interpretation of a predicate, P, is such that EP ∩ AP = ∅, I will call it classical. As is easy to see, if all the predicates of an interpretation are classical, then the interpretation is essentially a functional interpretation of classical logic. Hence, LP is a sub-logic of classical logic. It is, however, a proper sub-logic, since it is paraconsistent. I note, without proof, that the logical truths of LP are exactly those of classical logic.2

(363-364)

1. They are formulated this way in Priest (1984). Note that if we allow the possibility that sentences may relate to no truth value as well, but leave everything else the same, we get a logic with truth value gaps satisfying the Fregean principle: gap-in/gap-out. In particular, then, the alternative truth conditions just given are no longer equivalent. If we preserve these truth conditions instead, we obtain First Degree Entailment.

2. See Priest (1987), ch. 5.

(364)

[contents]

1.2.3

[Identity and Substitutivity in LP. ]

[Identity in this relation-based truth-evaluatory LP semantics is understood as the set of couples (from the domain) whose first and second terms are (taken to be) the same. We cannot, however, have any non-identical couplings excluded from the identity predicate/relation’s anti-extension, because in our LP semantics there can be no items that are neither identical nor non-identical to another item. (However, we can have a pairing of a thing with itself be in both the extension and anti-extension of the identity predicate/relation, meaning that the thing is both identical and non-identical with itself.) Substitutivity of identicals holds in this semantics.]

[This next part on identity I have trouble following, so I want to look first at similar passages in Priest’s Introduction to Non-Classical Logic section 26.2 (not yet summarized):

We now add identity to the language, starting with the relational semantics. In an interpretation, ⟨D, v⟩, vε(=) = {⟨d, d⟩ : d ∈ D}. The anti-extension of = can be any subset of D2.

(Priest’s Introduction to Non-Classical Logic, p.486)

So we have our D. Let us return to our Aglerian example:

D = {Alfred, Bill, Corinne}

C = {a, b, c}

d(a) = Alfred

d(b) = Bill

d(c) = Corinne

D1 = {⟨Alfred⟩, ⟨Bill⟩, ⟨Corinne⟩}

D2 = {⟨Alfred, Alfred⟩, ⟨Alfred, Bill⟩, ⟨Alfred, Corinne⟩, ⟨Bill, Alfred⟩, ⟨Bill, Bill⟩, ⟨Bill, Corinne⟩, ⟨Corinne, Alfred⟩, ⟨Corinne, Bill⟩, ⟨Corinne, Corinne⟩}

I am not exactly sure how to write the equality component here, but let us think of the extension for the equality relation/predicate (or whatever it is) as:

Evε(=) = {⟨Alfred, Alfred⟩, ⟨Bill, Bill⟩, ⟨Corinne, Corinne⟩}

What is the anti-extension? Under a classical mode of thinking, it would seem to be all those cases where the first term is not the same as the second. So:

Avε(=) = {⟨Alfred, Bill⟩, ⟨Alfred, Corinne⟩, ⟨Bill, Alfred⟩, ⟨Bill, Corinne⟩, ⟨Corinne, Alfred⟩, ⟨Corinne, Bill⟩}

So here, the anti-extension would be a subset of D2.  Now, in the text in Introduction to Non-Classical Logic, it reads: “The anti-extension of = can be any subset of D2.” The fact that it can be “any” subset suggests that in First Degree Entailment, we can have an interpretation where one or more of the items that are in the classical anti-extension are instead missing from it. So suppose the above set is lacking ⟨Alfred, Bill⟩. That would seem to suggest that Alfred and Bill are neither identical (as they are not in the extension of the identity predicate/relation) nor non-identical (as they are not in the anti-extension). With that in mind, let us return to our current text, but first recall the notion of the union of sets from Suppes’ Introduction to Logic section 9.5. (the following comes from the “Brief summary” of that post):

The set containing all the members in total from two sets is their union (∪):

(x)(x ∈ A ∪ B ↔ x ∈ A ∨ x ∈ B)

(Suppes, Introduction to Logic, pp.184-185)

Now back to our current Priest text:

To extend the machinery to identity is not difficult. We simply require identity to be a logical (semi-)constant: E= is always {⟨x, x⟩; x ∈ D}. (A= can be anything, except, of course, that E= ∪ A= = D2.)

This is very similar to the First Degree Entailment passages, but I found this part to be a bit tricky:

A= can be anything, except, of course, that E= ∪ A= = D2.

I am guessing that this is the stipulation that prevents there from being any couples that fall in neither the extension nor the anti-extension. In other words, I am guessing that it stipulates that the union of the extension and the anti-extension be the same as the set of all couples that can be made from the domain. This would seem to exclude the possibility that any pairing is neither identical nor non-identical. But what about the possibility that, for example, ⟨Alfred, Alfred⟩ is in both the extension and anti-extension of the identity relation/predicate? Suppose that is so, and take our definition of set union again:

The set containing all the members in total from two sets is their union (∪):

(x)(x ∈ A ∪ B ↔ x ∈ A ∨ x ∈ B)

(based on but not quoting Suppes, Introduction to Logic, pp.184-185)
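On my reading, then, the stipulation can be checked mechanically: the union of E= and A= must exhaust D2, while overlap between them remains permitted. A small Python sketch (my own, using the Aglerian domain; not Priest's notation):

```python
from itertools import product

D = {"Alfred", "Bill", "Corinne"}
D2 = set(product(D, repeat=2))  # all couples from the domain

# E= is fixed: every pairing of a domain member with itself.
E_id = {(x, x) for x in D}

# A= contains all the pairs of distinct members, plus a glut:
# Alfred is also taken to be non-identical with himself.
A_id = {(x, y) for x, y in D2 if x != y} | {("Alfred", "Alfred")}

# Priest's stipulation E= ∪ A= = D2: no pair falls in neither set ...
print(E_id | A_id == D2)                    # True
# ... but overlap is permitted: ⟨Alfred, Alfred⟩ is in both sets.
print(("Alfred", "Alfred") in E_id & A_id)  # True
```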

So as far as I can tell, ⟨Alfred, Alfred⟩ for example can be in both the extension and anti-extension of the identity relation/predicate. That would seem to suggest that Alfred is both identical to himself and not identical to himself. Priest then writes:

The logical truths in the language with identity are, again, exactly the same as those of classical logic. I note also that the semantics verify the standard substitutivity principle for identity: a = b, α(a) ⊨ α(b).

(364)

As I did not understand how to compare systems in terms of their logical truths, I will leave this part also to be filled in later after I learn it. (He again cites In Contradiction chapter 5, so I would think it is related to the above mention.) The last point is that these LP semantics allow for the substitutivity of items that have been designated as equal/identical. As for this part’s more technical features, I think what is required is that every evaluation making both a = b and α(a) true (or at least true) also makes α(b) true. By α(a) and α(b) I take it he means a formula in which a certain variable is substituted by a or by b, respectively. So if constant a is designated as being identical to b, then any valuation making that identity true, along with the formula instantiated with a, will also make the formula true when instantiated with b.]
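On that reading, the substitutivity principle can be illustrated with a small sketch (my own reconstruction, not Priest's): if a = b relates to 1, then, since E= is exactly the diagonal {⟨x, x⟩ : x ∈ D}, a and b denote the very same object, so any predicate relates α(a) and α(b) to exactly the same truth values.

```python
# Sketch of why a = b, P(a) ⊨ P(b) holds: when ⟨den(a), den(b)⟩ ∈ E=
# and E= is the diagonal, den(a) and den(b) are one and the same object,
# so P(a) and P(b) receive identical sets of truth values.

D = {"Alfred", "Bill"}
E_id = {(x, x) for x in D}

den = {"a": "Alfred", "b": "Alfred"}  # a and b co-denote

# P's interpretation may even be glutty:
E_P, A_P = {("Alfred",)}, {("Alfred",)}

def values(pred_ext, pred_anti, term):
    # The set of truth values P(term) relates to.
    vals = set()
    if (den[term],) in pred_ext:
        vals.add(1)
    if (den[term],) in pred_anti:
        vals.add(0)
    return vals

identity_true = (den["a"], den["b"]) in E_id
print(identity_true)                                   # True
print(values(E_P, A_P, "a") == values(E_P, A_P, "b"))  # True: same values
```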

To extend the machinery to identity is not difficult. We simply require identity to be a logical (semi-)constant: E= is always {⟨x, x⟩; x ∈ D}. (A= can be anything, except, of course, that E= ∪ A= = D2.) The logical truths in the language with identity are, again, exactly the same as those of classical logic. I note also that the semantics verify the standard substitutivity principle for identity: a = b, α(a) ⊨ α(b).

(364)

[contents]

From:

Priest, Graham. 1995. “Multiple Denotation, Ambiguity, and the Strange Case of the Missing Amoeba.” Logique et Analyse 38: 361-373.

Available at his website:

And at JSTOR:

Also cited:

Agler, David. 2013. Symbolic Logic: Syntax, Semantics, and Proof. New York: Rowman & Littlefield.

Grandy, Richard. 1979 [first published 1977]. Advanced Logic for Applications. Dordrecht: Reidel.

Kaynak, Baran. 2011. “Classical Relations and Fuzzy Relations.” Slide presentation. Available at:

https://www.slideshare.net/barankaynak/classical-relations-and-fuzzy-relations

Slide 8:

https://www.slideshare.net/barankaynak/classical-relations-and-fuzzy-relations/8

Priest, Graham. 2006 [first edition published 1987]. In Contradiction: A Study of the Transconsistent, Expanded Edition. Oxford/New York: Oxford University.

Priest, Graham. 2008. An Introduction to Non-Classical Logic: From If to Is, 2nd edn. Cambridge: Cambridge University.

Priest, Graham. 2014. One: Being an Investigation into the Unity of Reality and of its Parts, including the Singular Object which is Nothingness. Oxford: Oxford University.

Schuller, Frederic P. Class Video Lecture 1. “Logic of Propositions and Predicates.” From his course, “Geometric Anatomy of Theoretical Physics / Geometrische Anatomie der Theoretischen Physik” at the Institute for Quantum Gravity of the University of Erlangen-Nürnberg. Available at youtube at:

Class 1: “Logic of Propositions and Predicates” :
https://youtu.be/aEJpQ3d1ltA

28 minute mark for the conditional’s evaluation scheme and its relation to ex falso quodlibet:

https://youtu.be/V49i_LM8B0E?t=28m

Discussed on this blog here:

http://piratesandrevolutionaries.blogspot.com/2016/03/the-axioms-of-axiomatic-set-theory-from.html

Suppes, Patrick. 1957. Introduction to Logic. New York: Van Nostrand Reinhold / Litton Educational.

.