3 May 2014

Russell, Ch.39 of Principles of Mathematics, ‘The Infinitesimal Calculus’, summary notes

 

by Corry Shores
[Search Blog Here. Index-tags are found on the bottom of the left column.]

[Central Entry Directory]

[Bertrand Russell, entry directory]

[Other entries in the Russell Principles of Mathematics, series]

[The following is summary and quotation. All boldface, underlining, and bracketed commentary are mine. Please see the original text, as I did not follow it closely. Proofreading is incomplete, so mistakes are still present.]

 


 

Bertrand Russell


Principles of Mathematics


Part 5: Infinity and Continuity


Ch.39: The Infinitesimal Calculus





Brief Summary

Leibniz’ infinitesimal calculus used a concept of the infinitely small (the infinitesimal quantity). Russell explains how differential and integral calculus now function with the concept of limit rather than the concept of infinitesimal.

 



Summary

 

§303


‘Infinitesimal calculus’ refers to differential and integral calculus; however, “there is no allusion to, or implication of, the infinitesimal in any part of this branch of mathematics.” [330]


Leibniz was its inventor, but he considered it to be more practically applicable than metaphysically true. “He appears to have held that, if metaphysical subtleties are left aside, the Calculus is only approximate, but is justified practically by the fact that the errors to which it gives rise are less than those of observation”. [330]


But because Leibniz believed in the actual infinitesimal, he was unable to see that calculus rests on the doctrine of limits. Newton’s fluxions are closer to this truer foundation.

When he was thinking of Dynamics, his belief in the actual infinitesimal hindered him from discovering that the Calculus rests on the doctrine of limits, and made him regard his dx and dy as neither zero, nor finite, nor mathematical fictions, but as really representing the units to which, in his philosophy, infinite division was supposed to lead. And in his mathematical expositions of the subject, he avoided giving careful proofs, contenting himself with the enumeration of rules. At other times, it is true, he definitely rejects infinitesimals as philosophically valid; but he failed to show how, without the use of infinitesimals, the results obtained by means of the Calculus could yet be exact, and | not approximate. In this respect, Newton is preferable to Leibniz: his Lemmas give the true foundation of the Calculus in the doctrine of limits, and, assuming the continuity of space and time in Cantor’s sense, they give valid proofs of its rules so far as spatio-temporal magnitudes are concerned. [330-331]


Leibniz’ error has misled philosophers and mathematicians from his time to Weierstrass.

it is at any rate certain that, in his first published account of the Calculus, he defined the differential coefficient by means of the tangent to a curve. And by his emphasis on the infinitesimal, he gave a wrong direction to speculation as to the Calculus, which misled all mathematicians before Weierstrass (with the exception, perhaps, of De Morgan), and all philosophers down to the present day. It is only in the last thirty or forty years that mathematicians have provided the requisite mathematical foundations for a philosophy of the Calculus. [331]



§304


“The differential coefficient depends essentially upon the notion of a continuous function of a continuous variable”. [331]


In §254 we noted that a function relates the elements of one set to those of another, usually in an order-preserving way. [Also see Edwards and Penney’s account of functions here.]
And in §277 we defined a continuum as a dense series of terms whose values are all definable by means of limits contained within it. [Russell says we examined continuous variables in this chapter. There is no use of this term, just continuum and continuous series, which might therefore be equivalent.] If the function is one-valued and ordered correlatively with a continuous variable, then the function is continuous. [332d] But if the function has an order independent of correlation, it could possibly be that the series obtained in its correlation is not continuous. When the correlation does produce a continuous series in some interval, then the function is continuous in that interval. [From the informal and formal definitions, it seems that a function is continuous at a point when the limits to either side of that point have equal value. If we were thinking in infinitesimal terms: were we to move infinitesimally to the right or left of the point, the function would have the same value (except for the infinitesimal, inassignable difference). But it is discontinuous when the side-limits are different. As Mr. Flatcher writes, “The graph of a continuous function has no holes, jumps, or gaps. Think of a continuous function as one that you can graph without ever lifting your pencil.” (Mr. Flatcher / Flatchermatics) Consider for example this function.

f(x) = \begin{cases}
  x^2         & \mbox{ for } x < 1 \\
  0           & \mbox{ for } x = 1 \\
  2 - (x-1)^2 & \mbox{ for } x > 1
\end{cases}

At x = 1, for the function to be continuous, the limits on either of its sides should equal f(1) = 0. However, as we can see, the limiting y values are much different from zero: the limit from the left is 1, and the limit from the right is 2.

http://upload.wikimedia.org/wikipedia/commons/e/e6/Discontinuity_jump.eps.png

(Image and function from ‘Classification of Discontinuities’, wikipedia)

As you can see, at the limit right before x = 1, the y value is greater than zero, and at the limit right after, the y value is even greater than that. Below we have an animation showing a transition from continuity to discontinuity. The caption reads: “A sequence of continuous functions fn(x) whose (pointwise) limit function f(x) is discontinuous. The convergence is not uniform.”

[animated diagram: Uniform_continuity_animation]

(Animated diagram and above caption from ‘Continuous function’, wikipedia)

]
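The jump in the piecewise example above can be checked numerically. This is a minimal Python sketch (the step sizes and sample points are my own choices, not from the text):

```python
# The piecewise function from the Wikipedia example above,
# with a jump discontinuity at x = 1.
def f(x):
    if x < 1:
        return x**2
    elif x == 1:
        return 0
    else:
        return 2 - (x - 1)**2

# Approach x = 1 from each side with a shrinking step h.
for h in [0.1, 0.01, 0.001]:
    print(h, f(1 - h), f(1 + h))

# The left-hand values tend to 1 and the right-hand values tend to 2,
# so the two one-sided limits differ, and neither equals f(1) = 0.
```

In Dini's terms (quoted above), for any σ smaller than the jump no suitable ε exists, so the function is discontinuous at x = 1.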

If the function is one-valued, and is only ordered by correlation with the variable, then, when the variable is continuous, there is no sense in asking whether the function is continuous; for such a series by correlation is always ordinally similar to its | prototype. But when, as where the variable and the field of the function are both classes of numbers, the function has an order independent of correlation, it may or may not happen that the values of the function, in the order obtained by correlation, form a continuous series in the independent order. When they do so in any interval, the function is said to be continuous in that interval. The precise definitions of continuous and discontinuous functions, where both x and f(x) are numerical, are given by Dini as follows. The independent variable x is considered to consist of the real numbers, or of all the real numbers in a certain interval; f(x), in the interval considered, is to be one-valued, even at the end-points of the interval, and is to be also composed of real numbers. We then have the following definitions, the function being defined for the interval between α and β, and ɑ being some real number in this interval.

“We call f(x) continuous for x = ɑ, or in the point ɑ, in which it has the value f(ɑ), if for every positive number σ, different from 0, but as small as we please, there exists a positive number ε, different from 0, such that, for all values of δ which are numerically less than ε, the difference f(ɑ + δ) − f(ɑ) is numerically less than σ. In other words, f(x) is continuous in the point x = ɑ, where it has the value f(ɑ), if the limit of its values to the right and left of a is the same, and equal to f(ɑ).”

“Again, f(x) is discontinuous for x = ɑ, if, for any positive value of σ, there is no corresponding positive value of ε such that, for all values of δ which are numerically less than ε, f(ɑ + δ) − f(ɑ) is always less than σ; in other words, f(x) is discontinuous for x = ɑ, when the values f(a + h) of f(x) to the right of a, and the values f(ɑ − h) of f(x) to the left of ɑ, the one and the other, have no determinate limits, or, if they have such, these are different on the two sides of ɑ; or, if they are the same, they differ from the value f(ɑ), which the function has in the point ɑ.” [331-332]
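Dini's ε–σ condition can be illustrated numerically for a function that is continuous. A Python sketch; the function x², the point a = 2, and the choice ε = σ/10 are my own illustrative assumptions, not from the text:

```python
# Dini's condition at a point a: for every positive sigma there is a
# positive epsilon such that |f(a + d) - f(a)| < sigma whenever |d| < epsilon.
def f(x):
    return x**2  # continuous everywhere

a = 2.0
for sigma in [0.1, 0.01, 0.001]:
    epsilon = sigma / 10  # works here because the slope near a = 2 is about 4
    samples = [epsilon * t for t in (-0.9, -0.5, 0.5, 0.9)]
    assert all(abs(f(a + d) - f(a)) < sigma for d in samples)
    print(f"sigma = {sigma}: epsilon = {epsilon} suffices at the sampled deltas")
```

A finite check like this only samples a few values of δ, of course; the definition quantifies over all of them.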


But the limit of a function is slightly different from the limit in general (of series) that we have discussed so far. Russell defines the limit in this way. [Put in simple terms, it seems that the limit is the value most immediate to the point, were we using infinitesimal terms.]

A function of a perfectly general kind will have no limit as it approaches any given point. In order that it should have a limit as x approaches a from the left, it is necessary and sufficient that, if any number ε be mentioned, any two values of f(x), when x is sufficiently near to a, but | less than a, will differ by less than ε; in popular language, the value of the function does not make any sudden jumps as x approaches a from the left. Under similar circumstances, f(x) will have a limit as it approaches a from the right. But these two limits, even when both exist, need not be equal either to each other or to f(ɑ), the value of the function when x = ɑ. The precise condition for a determinate finite limit may be thus stated:

“In order that the values of y to the right or left of a finite number a (for instance to the right) should have a determinate and finite limit, it is necessary and sufficient that, for every arbitrarily small positive number σ, there should be a positive number ε, such that the difference y_{a+ε} − y_{a+δ} between the value y_{a+ε} of y for x = a + ε, and the value y_{a+δ}, which corresponds to the value a + δ of x, should be numerically less than σ, for every δ which is greater than 0 and less than ε.” It is possible, instead of thus defining the limit of a function, and then discussing whether it exists, to define generally a whole class of limits. In this method, a number z belongs to the class of limits of y for x = ɑ, if, within any interval containing ɑ, however small, y will approach nearer to z than by any given difference. Thus, for example, sin 1/x, as x approaches zero, will take every value from −1 to +1 (both inclusive) in every finite interval containing zero, however small. Thus the interval from −1 to +1 forms, in this case, the class of limits for x = 0. This method has the advantage that the class of limits always exists. It is then easy to define the limit as the only member of the class of limits, in case this class should happen to have only one member. This method seems at once simpler and more general.
[332-333]
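Russell's sin 1/x example can be made concrete: arbitrarily close to 0, the function still attains both +1 and −1, which is why the whole interval from −1 to +1 forms its class of limits. A minimal Python sketch (the particular sample points are my own choices):

```python
import math

# sin(1/x) attains +1 whenever 1/x = (4k + 1) * pi / 2,
# and -1 whenever 1/x = (4k + 3) * pi / 2, for integer k.
# Taking k large puts these points arbitrarily close to 0.
k = 10**6
x_plus = 2 / ((4 * k + 1) * math.pi)   # about 3.2e-7, very close to 0
x_minus = 2 / ((4 * k + 3) * math.pi)

print(math.sin(1 / x_plus))    # approximately +1
print(math.sin(1 / x_minus))   # approximately -1
```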



§305


Russell will now discuss the derivative or differential coefficient of the function. [To better grasp Russell’s example, we will draw from our summary of one of David Jerison’s class lectures on the differential. We will find an equivalent formulation to Russell’s so that we can make what he is saying more concrete. So first let’s understand the formulation. Consider a curve with point P.

The line has a different slope (tendency of variation) at each point. We will ask, what is its slope at x0, or point P? We determine the y value on the basis of the function f(x). And since P = (x, y), then P = (x0, f(x0)). Our calculation will involve looking at the ratio of the change in y (or change in f, we might say) to the change in x.

The slope is found at the limit as Δx goes to zero.

We see the coordinates given here:

[diagram: the curve with points P = (x0, f(x0)) and Q = (x0 + Δx, f(x0 + Δx)), and the secant line through them]

Slope is rise-over-run, or Δy / Δx. So

m = (y2 – y1) / (x2 – x1)

or in our case,

m = (f2 – f1) / (x2 – x1)

Notice in the above diagram that the y values in P and Q are f(x0) and f(x0 + Δx). For (x2 – x1) we only need Δx. So if we substitute these values into the slope formula, we have:

(f(x0 + Δx) – f(x0)) / Δx

The derivative we will denote as f’(x0). We call that formulation “the difference quotient”. Thus

f’(x0) = lim(Δx→0) (f(x0 + Δx) – f(x0)) / Δx

Now let’s use a specific function:

f(x) = 1 / x

[graph of y = 1/x, with the tangent at x0 shown as a dotted line]

We want to find the derivative at x0, and the dotted line is the tangent whose slope we seek. So we need to find Δf / Δx. The formula for Δf was (f(x0 + Δx) – f(x0)). When we plug in our function, we (multiplicatively) invert the x values, that is, put a one over them, hence we obtain [18]:

Δf / Δx = (1 / (x0 + Δx) – 1 / x0) / Δx

When we remove the embedded fractions (by moving each inner denominator down into the overall denominator), we get:

1 / ((x0 + Δx) Δx) – 1 / (x0 Δx)

As you can see, 1 / Δx is common to both parts. When we factor it out, we get:

(1 / Δx) (1 / (x0 + Δx) – 1 / x0)

The subtracted parts need a common denominator for us to simplify them. To give them both a common denominator, we multiply each by the other’s denominator set over itself (thus equaling 1).

(1 / Δx) ((1 / (x0 + Δx)) · (x0 / x0) – (1 / x0) · ((x0 + Δx) / (x0 + Δx)))

Let’s combine these figures to get:

(1 / Δx) (x0 / ((x0 + Δx) x0) – (x0 + Δx) / ((x0 + Δx) x0))

We can now subtract the terms:

(1 / Δx) · (x0 – (x0 + Δx)) / ((x0 + Δx) x0)

We then distribute the negative in the right side of the numerator to make x0 – x0 –Δx, thereby leaving –Δx; hence:

(1 / Δx) · (–Δx) / ((x0 + Δx) x0)

Since the factor 1 / Δx and the numerator –Δx share Δx, we can cancel them, leaving us with –1 in the numerator; what remains is:

–1 / ((x0 + Δx) x0)

The last step is to take the limit as Δx tends to zero, that is, to substitute zero for Δx. We can do this now because, before, the numerator and denominator gave us a number divided by zero, which is undefined. But through algebraic operations we were able to cancel the Δx out of the expression without leaving a zero in the denominator. Thus we now take the limit, making Δx equal zero, leaving us with:

f’(x0) = –1 / x0²

Let’s put all of this together into one large formulation:

f’(x0) = lim(Δx→0) Δf / Δx = lim(Δx→0) (1 / (x0 + Δx) – 1 / x0) / Δx = lim(Δx→0) –1 / ((x0 + Δx) x0) = –1 / x0²

We then compare with our chart. The derivative is negative, and likewise the slope is negative. Also, as x0 goes to infinity, that is, as x moves to the right, the tangent becomes less steep:

[graph of f(x) = 1 / x with its tangent lines]

As we will see, Russell uses a very similar formulation, except with δ instead of our Δx]
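The worked result f’(x0) = –1/x0² can be verified numerically by computing the difference quotient for shrinking Δx. A minimal Python sketch; the sample point x0 = 2 is my own choice:

```python
# Difference quotient for f(x) = 1/x at x0, for shrinking dx.
def f(x):
    return 1 / x

x0 = 2.0
for dx in [1e-2, 1e-4, 1e-6]:
    quotient = (f(x0 + dx) - f(x0)) / dx
    print(dx, quotient)

# The quotients tend to -1/x0**2 = -0.25, matching the algebraic result.
```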

 

If f(x) be a function which is finite and continuous at the point x, then it may happen that the fraction

{f(x + δ) − f(x)}/δ

has a definite limit as δ approaches to zero. If this does happen, the limit is denoted by f '(x), and is called the derivative or differential of f(x) in the point x. If, that is to say, there be some number z such that, given any number ε however small, if δ be any number less than some number η, but positive, then {f(x ± δ) − f(x)}/ ± δ differs from z by less than ε, then z is the derivative of f(x) in the point x. If the limit in question does not exist, then f(x) has no derivative at the point x. If f(x) be not continuous at this point, the limit does not exist; if f(x) be continuous, the limit may or may not exist.
[334]


§306

[Russell will say that the notion of the infinitesimal was not used in this definition. This probably results from the notion of the limit as standing outside the series: the approaching values can only ever get closer and closer, with no final value. I challenge this view, because it implies that the value of the interval between the limit and the series that approaches it is finite. Let’s take the idea that between any two values is a middle value, infinitely. So a series of diminishing values could perhaps be something like 1/1, 1/2, 1/4, 1/8. What is important regarding Cantor’s infinity is that the cardinal value for infinity, ℵ0, is not among the natural numbers. It is the limit toward which their law of genesis implicitly strives but which it does not precisely attain. But so long as the interval is finite, which Russell insists it must be, does it fulfill the definition of “given any number ε however small”? It seems here the idea is not “the interval is so small it is infinitely small and thus continuous with zero” but rather “the interval is very small but still finite, yet it is close enough to zero that we can substitute one for the other.” Perhaps this is where the term “arbitrarily” small comes from. Is it strange that this ‘fudging’ sort of operation, where we exchange a finite value for zero, is considered more precise than when we think of this value as infinitely small, especially since in both cases we assume that there are an infinity of subdivisions? If intervals really are infinitely subdividable, why is it so hard to conceive of the intervals between them as being infinitely small? If they were not infinitely small, then they would be finitely small, and an infinity of them would compose an infinitely large interval.
But an infinity of infinitely small values could conceivably compose a finite interval (for we would multiply infinity times one over infinity, equaling a finite unit after the infinities cancel).] Russell emphasizes that the fact that he has defined the derivative using limits and not infinitesimals is philosophically the most important part of his treatment of the calculus. [The philosophical implication of this might be that the law of continuity does not hold, and thus that change is not a matter of paradoxically co-given contrary states. Also, change or motion would be ‘at-at’; infinitesimal intervals suggest ‘between-between’ or ‘at-at plus at-at’. I think Russell insists on this philosophical point for logical reasons. The problem with the infinitesimal calculus and its law of continuity is that it is a dialetheia, a true contradiction, and Russell will not allow exceptions to his rigid logic of perfect self-consistency. Because of the advances in dialetheic logic, it is no longer illogical to say that change is inherently paradoxical. And on account of the invention of non-standard analysis, it is no longer conceptually sloppy to use the notion of the infinitesimal in calculus. So here too is equally my greatest point of emphasis and the purpose for all these mathematical technicalities: despite Russell’s insistence, we can have a dialetheic between-between theory of motion, and it will not have such oddities as we find in Russell’s account, such as an infinity of finite intervals composing a finite interval and not an infinite one, or a moving object always being in no state other than rest, just rest in different places at different times.]

The only point which it is important to notice at present is, that there is no implication of the infinitesimal in this definition. The number δ is always finite, and in the definition of the limit there is nothing to imply the contrary. In fact, {f(x + δ) − f(x)}/δ, regarded as a function of δ, is wholly indeterminate when δ = 0. The limit of a function for a given value of the independent variable is, as we have seen, an entirely different notion from its value for the said value of the independent variable, and the two may or may not be the same number. In the present case, the limit may be definite, but the value for δ = 0 can have no meaning. Thus it is the doctrine of limits that underlies the Calculus, and not any pretended use of the infinitesimal. This is the only point of philosophic importance in the present subject, and it is only to elicit this point that I have dragged the reader through so much mathematics. [383]

[In the above, it seems Russell is absolutely clear that the small interval does not equal 0, but it also does not equal an infinitesimally small value. Rather, it equals a finite value that is as small as you want it to be.]
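Russell's point that the quotient is "wholly indeterminate when δ = 0," while its limit is perfectly definite, can be shown directly. A sketch in Python; the function x² and the point x = 3 are my own illustrative choices:

```python
# The difference quotient {f(x + d) - f(x)}/d as a function of d.
def quotient(f, x, d):
    return (f(x + d) - f(x)) / d

f = lambda x: x**2

# For ever-smaller finite d, the quotient approaches the limit 6 ...
print([quotient(f, 3, d) for d in (0.1, 0.001, 1e-6)])

# ... but the value *at* d = 0 "can have no meaning": it is 0/0.
try:
    quotient(f, 3, 0)
except ZeroDivisionError:
    print("undefined at d = 0")
```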



§307

[In Russell’s description of the definite integral, we divide the interval up into n portions. We find their ‘areas’ or products. Then we want the sum of all such interval areas or products. As we increase n, the sum tends toward a definite limit, which gives us the integral sum.  For more on this operation, see David Jerison’s definite integral class or Edwards & Penney’s section on Riemann sums.]


Just as the derivative of a function is the limit of a fraction, so the definite integral is the limit of a sum. The definite integral may be defined as follows: Let f(x) be a function which is one-valued and finite in the interval α to β (both inclusive). Divide this interval into any n portions by means of the | (n − 1) points x1, x2, . . . xn − 1, and denote by δ1, δ2, . . . δn the n intervals x1 − α, x2 − x1, . . . β − xn − 1. In each of these intervals, δs, take any one of the values, say f(ζs), which f(x) assumes in this interval, and multiply this value by the interval δs. Now form the sum

S = f(ζ1) δ1 + f(ζ2) δ2 + . . . + f(ζn) δn

This sum will always be finite. If now, as n increases, this sum tends to one definite limit, however f(ζs) may be chosen in its interval, and however the intervals be chosen (provided only that all are less than any assigned number for sufficiently great values of n)—then this one limit is called the definite integral of f(x) from α to β . If there is no such limit, f(x) is not integrable from α to β.
[384-385]
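Russell's definition of the definite integral as the limit of a sum can be sketched in Python. Here the function x² on the interval [0, 1], the uniform partition, and the left endpoints as the ζs are all my own simplifying choices (Russell's definition allows arbitrary partitions and sample points):

```python
# Form the sum f(zeta_1)*delta_1 + ... + f(zeta_n)*delta_n for a uniform
# partition of [a, b] into n intervals, sampling at left endpoints.
def riemann_sum(f, a, b, n):
    width = (b - a) / n
    return sum(f(a + s * width) * width for s in range(n))

f = lambda x: x**2
for n in [10, 100, 1000]:
    print(n, riemann_sum(f, 0, 1, n))

# As n increases, the sums tend to one definite limit, 1/3,
# which is the definite integral of x**2 from 0 to 1.
```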



§308

 

Russell now explains that neither the concept of the infinitesimal nor that of infinity was used in this account of integral calculus. In fact, the integral is not even a sum [which might be impossible were the terms infinite in number]. Rather, this value is the limit of a sum [the boundary value toward which its summing is tending]. But it never reaches that value. [If it did, then that could only be by means of an infinitesimally small increment between the limit and the next value near it. But so long as it never gets there, and instead continually gets nearer without arriving, then it is made of many finite units. Imagine a segment that is built by adding 1/2 + 1/4 + 1/8 + etc.; it is tending toward the total 1. See the divided square diagram on this page for an illustration (found near the end, under “geometric series that halves each time”). We do not need infinitely many terms to know it is tending to that limit. So we do not need the concepts of infinitesimal and infinity. But if we think that it does actually reach that limit, then it could only do so with an infinity of divisions, the smallest being infinitesimal.]
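The halving series in the bracketed comment can be computed directly: every partial sum is finite and strictly less than the limit 1. A minimal Python sketch:

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ...: after k terms the sum is 1 - 1/2**k.
partial = 0.0
for k in range(1, 21):
    partial += 1 / 2**k

print(partial)  # close to 1, but strictly less than it
```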


As in the case of the derivative, there is only one important remark to make about this definition. The definite integral involves neither the infinite nor the infinitesimal, and is itself not a sum, but only and strictly the limit of a sum. All the terms which occur in the sum whose limit is the definite integral are finite, and the sum itself is finite. If we were to suppose the limit actually attained, it is true, the number of intervals would be infinite, and the magnitude of each would be infinitesimal; but in this case, the sum becomes meaningless. Thus the sum must not be regarded as actually attaining its limit. But this is a respect in which series in general agree. Any series which always ascends or always descends and has no last term cannot reach its limit; other infinite series may have a term equal to their limit, but if so, this is a mere accident. The general rule is, that the limit does not belong to the series which it limits; and in the definition of the derivative and the definite integral we have merely another instance of this fact. The so-called infinitesimal calculus, therefore, has nothing to do with the infinitesimal, and has only indirectly to do with the infinite—its connection with the infinite being, that it involves limits, and only infinite series have limits.
[386]

 

 


Sources [unless otherwise noted, all bracketed page citations are from]:

Bertrand Russell. Principles of Mathematics. London/New York: Routledge, 2010 [1st published 1903].

 

Otherwise:

Mr. Flatcher. Continuity and Differentiability.
http://fletchmatics.weebly.com/continuity-and-differentiability.html


Wikipedia. ‘Classification of Discontinuities’.
http://en.wikipedia.org/wiki/Classification_of_discontinuities


Wikipedia. ‘Continuous function’.
http://en.wikipedia.org/wiki/Continuous_function

 


 
