18 Limits

This section uses the following add-on packages:

using CalculusWithJulia
using Plots
plotly()
using Richardson # for extrapolation
using SymPy # for symbolic limits
A historic problem in mathematics was to find the area under the graph of $f(x) = x^2$ between $x = 0$ and $x = 1$.
There wasn’t a ready-made formula for the area of this shape, as was known for a triangle or a square. However, Archimedes found a method to compute areas enclosed by a parabola and line segments that cross the parabola.
The figure illustrates a means to compute the area bounded by the parabola and a line: an inscribed triangle is drawn, ever-smaller triangles fill in the remaining gaps, and the triangle areas are summed. This value approaches the area of the parabolic segment as more triangles are included.
Archimedes used this method to solve a wide range of area problems related to basic geometric shapes, including a more general statement of what we described above.
The approach rests on the idea of better and better approximations pinning down an unknown value.

Next, we illustrate how Archimedes approximated $\pi$ by bounding the circumference of a circle between the perimeters of inscribed and circumscribed regular polygons.
Here Archimedes uses bounds to constrain an unknown value. Had he been able to compute these bounds for polygons with larger and larger numbers of sides, the unknown value would have been pinned down ever more precisely.
Continuing these concepts, Fermat, in the 1600s, essentially took a limit to find the slope of a tangent line to a polynomial curve. Newton, in the late 1600s, exploited the idea in his development of calculus (as did Leibniz). Yet it wasn't until the 1800s that Bolzano, Cauchy, and Weierstrass put the idea on a firm footing.
To make things more precise, we begin by discussing the limit of a univariate function $f(x)$ as $x$ approaches a value $c$.

Informally, if a limit exists it is the value that $f(x)$ gets close to as $x$ gets close to, but not equal to, $c$.
The modern formulation is due to Weierstrass: the limit of $f(x)$ as $x$ approaches $c$ is $L$ if for every $\epsilon > 0$ there is a $\delta > 0$ such that $|f(x) - L| < \epsilon$ whenever $0 < |x - c| < \delta$.
We comment on this later.
Cauchy begins his incredibly influential treatise on calculus by considering two examples: the limit as $x$ goes to $0$ of $\sin(x)/x$ and the limit as $x$ goes to $0$ of $(1 + x)^{1/x}$.

These take the indeterminate forms $0/0$ and $1^\infty$ respectively, so neither limit can be found by simply evaluating the expression at $0$.
For the first, basic geometry gives, for small $x > 0$, the bound $\sin(x) \le x \le \tan(x)$, which is equivalent to:

$\cos(x) \le \frac{\sin(x)}{x} \le 1.$

This bounds the expression $\sin(x)/x$ between two quantities, each of which goes to $1$ as $x$ goes to $0$, so the limit must be $1$.

The above bound comes from this figure, for small $x > 0$:
To discuss the case of $(1 + x)^{1/x}$ we can take a numeric approach, substituting in a small value of $x$:
x = 1/10000
(1 + x)^(1/x)
2.7181459268249255
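As a quick comparison, this value already differs from Euler's number (exp(1) in Julia) by only about 1.4e-4, anticipating the limit discussed below:

exp(1) - (1 + x)^(1/x)   # ≈ 1.36e-4, using x = 1/10000 from above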
A table can show the progression to this value:
f(x) = (1 + x)^(1/x)
xs = [1/10^i for i in 1:5]
[xs f.(xs)]
5×2 Matrix{Float64}:
0.1 2.59374
0.01 2.70481
0.001 2.71692
0.0001 2.71815
1.0e-5 2.71827
This progression can be seen to be increasing; Cauchy, in his treatise, establishes this algebraically. These values are clearly increasing as $x$ gets smaller, yet they remain bounded, so they approach the value now written as $e \approx 2.71828\ldots$, with a suitably defined limit.
These two cases illustrate that, though the definition of a limit exists, the computation of a limit is generally done by other means, with numeric investigation supplying intuition about its value.
18.0.1 Indeterminate forms
First it should be noted that for most of the functions encountered, the concept of a limit at a typical point $c$ is nothing more than function evaluation at $c$. Limits become interesting only where evaluation fails, producing an indeterminate form such as $0/0$, $\infty/\infty$, $0 \cdot \infty$, $\infty - \infty$, $0^0$, $1^\infty$, or $\infty^0$.
Julia can help - at times - identify these indeterminate forms, as many such operations produce NaN. For example:
0/0, Inf/Inf, 0 * Inf, Inf - Inf
(NaN, NaN, NaN, NaN)
However, the values with powers generally do not help, as the IEEE standard has 0^0 evaluating to 1:
0^0, 1^Inf, Inf^0
(1, 1.0, 1.0)
However, this can be unreliable, as floating point issues may mask the true evaluation; still, as a cheap trick it can work. So, the limit of $\sin(x)/x$ as $x$ approaches $1$ is found by simple evaluation:
x = 1
sin(x) / x
0.8414709848078965
But at $x = 0$, where the limit question is actually interesting, evaluation is of no help, as the expression is indeterminate:
x = 0
sin(x) / x
NaN
The above is really just a heuristic; for some functions it is simply not true. For example, the floor function rounds down to the nearest integer. At integer values the function has a jump (and hence no limit), even though it is defined there and evaluates without producing NaN.
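For example, a small check just below and at the integer 1 shows the jump:

floor(1 - 1e-10), floor(1.0)  # (0.0, 1.0): the value jumps at the integer, so no limit exists there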
18.1 Graphical approaches to limits
Let's return to the function $f(x) = \sin(x)/x$. A graph over an interval containing $0$ suggests the value of the limit there:
f(x) = sin(x)/x
plot(f, -pi/2, pi/2;
seriestype=[:scatter, :line], # show points and line segments
legend=false)
The graph passes right over the problem point: $0$ is not among the plotted $x$ values, so the hole at $x = 0$ is not visible, and the $y$ values appear to head toward $1$.
We can also verify Euler's intuition that $\sin(x) \approx x$ for small $x$ through this graph:
plot(sin, -pi/2, pi/2)
plot!(identity) # identity is the function y = x (just as zero is the function y = 0)
That the two graphs are indistinguishable near $0$ illustrates the approximation and suggests $\sin(x)/x \approx 1$ for small $x$.
A parametric plot shows the same: we see below that the slope at $(0, 0)$ is essentially $1$, as the curve hugs the line $y = x$ near the origin.
plot(sin, identity, -pi/2, pi/2) # parametric plot
The graphical approach to limits - plotting $f(x)$ over an interval containing $c$ and inspecting the $y$ values - is easy and intuitive, but it only suggests the value of a limit and, as a later example shows, it can be fooled.
Example
This example illustrates the same limit a different way: sliding the $x$ value toward $0$ in the original interactive figure shows the corresponding point on the graph approaching the height given by the limit.
Example
Consider now the following limit: $\lim_{x \to 2} \frac{x^2 - 5x + 6}{x^2 + x - 6}$.
Noting that this is a ratio of nice polynomial functions, we first check whether there is anything to do:
f(x) = (x^2 - 5x + 6) / (x^2 + x - 6)
c = 2
f(c)
NaN
The NaN indicates that this function is indeterminate at $c = 2$. A quick plot over an interval containing $2$ gives an idea of the limit:
c, delta = 2, 1
plot(f, c - delta, c + delta)
The graph looks "continuous." In fact, the value $c = 2$ is a removable singularity: redefining $f$ at $2$ to be $-0.2$ would make the function continuous there.
As an aside, we can redefine f using the "ternary operator":
f(x) = x == 2.0 ? -0.2 : (x^2 - 5x + 6) / (x^2 + x - 6)
This particular case is a textbook example: one can easily factor the numerator and denominator to get $f(x) = \frac{(x-2)(x-3)}{(x-2)(x+3)}$.

Written in this form, we clearly see that away from $x = 2$ this is the same function as $g(x) = (x - 3)/(x + 3)$, which is continuous at $2$ with $g(2) = -1/5$, the value suggested by the graph.
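As a quick symbolic check of that factoring, here is a small sketch using SymPy (loaded at the top) with a throwaway symbolic variable u:

@syms u::real
factor(u^2 - 5u + 6), factor(u^2 + u - 6)   # both contain the common factor (u - 2)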
18.2 Numerical approaches to limits
The investigation of the limit of $(1 + x)^{1/x}$ above, through a table of values, is typical of the numeric approach to limits.

The basic idea is to create a sequence of $x$ values going to $c$ and then investigate the corresponding $f(x)$ values for a pattern.

This is best seen by example. Suppose we are asked to investigate $\lim_{x \to 25} \frac{\sqrt{x} - 5}{\sqrt{x - 16} - 3}$.

We first define a function and check whether there is an issue at $c = 25$:
f(x) = (sqrt(x) - 5) / (sqrt(x-16) - 3)
f (generic function with 1 method)
c = 25
f(c)
NaN
So yes, an issue of the indeterminate form $0/0$, as both the numerator and denominator are $0$ at $25$. To investigate numerically, we first create a collection of small offsets:
hs = [1/10^i for i in 1:8]
8-element Vector{Float64}:
0.1
0.01
0.001
0.0001
1.0e-5
1.0e-6
1.0e-7
1.0e-8
We can add these to $c = 25$ to get $x$ values just to the right of $c$, and then evaluate $f$ at each:
xs = c .+ hs
ys = f.(xs)
8-element Vector{Float64}:
0.6010616008415922
0.6001066157341047
0.6000106661569936
0.6000010666430725
0.6000001065281493
0.6000000122568625
0.5999999946709295
0.6
To visualize, we can put these in a table using [xs ys] notation:
[xs ys]
8×2 Matrix{Float64}:
25.1 0.601062
25.01 0.600107
25.001 0.600011
25.0001 0.600001
25.0 0.6
25.0 0.6
25.0 0.6
25.0 0.6
The second column appears to be settling on the value 0.6. (The trailing rows display as 25.0 and 0.6 only because matrices are shown with a limited number of digits; the underlying values differ slightly.)
Since limits depend on the behavior of $f$ on both sides of $c$, we repeat with values to the left of $c$ (note the .- sign in c .- hs):
xs = c .- hs
ys = f.(xs)
[xs ys]
8×2 Matrix{Float64}:
24.9 0.598928
24.99 0.599893
24.999 0.599989
24.9999 0.599999
25.0 0.6
25.0 0.6
25.0 0.6
25.0 0.6
Same story. The numeric evidence supports a limit of $L = 0.6$.
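One way to anticipate the exact answer: multiplying the numerator and denominator by their conjugates shows the expression equals (sqrt(x - 16) + 3) / (sqrt(x) + 5) for x near, but not equal to, 25, and that form can simply be evaluated. A small sketch:

g(x) = (sqrt(x - 16) + 3) / (sqrt(x) + 5)   # algebraically equal to f away from 25
g(25)                                        # 0.6, that is 3/5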
The lim function
The CalculusWithJulia package provides a convenience function lim(f, c) to create tables to showcase limits. The dir keyword can be "+-" (the default) to show values from both the left and the right; "+" to only show values to the right of c; and "-" to only show values to the left of c.

For example:
lim(f, c)
25.1        0.6010616008415922
25.01       0.6001066157341047
25.001      0.6000106661569936
25.0001     0.6000010666430725
25.00001    0.6000001065281493
25.000001   0.6000000122568625
⋮           ⋮
c           L?
⋮           ⋮
24.999999   0.5999999893418593
24.99999    0.5999998930988751
24.9999     0.5999989333095628
24.999      0.5999893328251538
24.99       0.5998932823395853
24.9        0.5989282061387314
The numbers are displayed in decreasing order, so the values to the left of $c$ appear below the row marked c; reading toward that middle row from either end shows the limiting behavior from the right and from the left.
lim(f, c; dir="-") # or even lim(f, c, -)
c           L?
⋮           ⋮
24.999999   0.5999999893418593
24.99999    0.5999998930988751
24.9999     0.5999989333095628
24.999      0.5999893328251538
24.99       0.5998932823395853
24.9        0.5989282061387314
Example: the secant line
Let $f(x) = x^x$ and consider the slope of the secant line connecting $(c, f(c))$ and $(c + h, f(c + h))$, namely $(f(c+h) - f(c))/h$.

As $h$ goes to $0$, this slope should approach the slope of the tangent line to the graph of $f$ at $c$.
To look at the limit in this example, we have (recycling the values in hs):
c = 1
f(x) = x^x
sec_line_slope(h) = (f(c+h) - f(c)) / h
lim(sec_line_slope, 0)
 0.1       1.1053424105457577
 0.01      1.0100503341741618
 0.001     1.00100050033336
 0.0001    1.0001000049997266
 1.0e-5    1.0000100000517875
 1.0e-6    1.000001000006634
⋮          ⋮
c          L?
⋮          ⋮
-1.0e-6    0.9999990000508774
-1.0e-5    0.9999900000501326
-0.0001    0.9999000049998319
-0.001     0.9990004996667246
-0.01      0.9900496674924875
-0.1       0.9046742391703777
The limit looks like $L = 1$.
18.2.1 Issues with the numeric approach
The numeric approach often gives a good intuition as to the existence of a limit and its value. However, it can be misleading. Consider this limit question:
We can see that it is indeterminate of the form $0/0$:
g(x) = (1 - cos(x)) / x^2
g(0)
NaN
What is the value of the limit? Here we build a table with a few more rows than the default:
lim(g, 0; n=9)
 0.1      0.49958347219741783
 0.01     0.4999958333473664
 0.001    0.49999995832550326
 0.0001   0.4999999969612645
 1.0e-5   0.5000000413701854
 1.0e-6   0.5000444502911705
 1.0e-7   0.4996003610813205
 1.0e-8   0.0
 1.0e-9   0.0
⋮         ⋮
c         L?
⋮         ⋮
-1.0e-9   0.0
-1.0e-8   0.0
-1.0e-7   0.4996003610813205
-1.0e-6   0.5000444502911705
-1.0e-5   0.5000000413701854
-0.0001   0.4999999969612645
-0.001    0.49999995832550326
-0.01     0.4999958333473664
-0.1      0.49958347219741783
Hmm, the values appear to be going to 0.5, but then they abruptly become 0. The 0 values are an artifact, due to loss of precision in the subtraction 1 - cos(x) for very small x. To investigate, we look more carefully at the two pieces of the ratio:
xs = [1/10^i for i in 1:8]
y1s = [1 - cos(x) for x in xs]
y2s = [x^2 for x in xs]
[xs y1s y2s]
8×3 Matrix{Float64}:
0.1 0.00499583 0.01
0.01 4.99996e-5 0.0001
0.001 5.0e-7 1.0e-6
0.0001 5.0e-9 1.0e-8
1.0e-5 5.0e-11 1.0e-10
1.0e-6 5.00044e-13 1.0e-12
1.0e-7 4.996e-15 1.0e-14
1.0e-8 0.0 1.0e-16
Looking at the bottom of the second column reveals the error. The value of 1 - cos(1.0e-8) is 0 and not a value around 5e-17, as would be expected from the pattern above it. This is because the smallest floating point value less than 1.0 is more than 5e-17 units away, so cos(1e-8) is evaluated to be 1.0. There just isn't enough granularity in the floating point numbers to get this close to 0 and still see the expected pattern.
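A quick check of this granularity claim (prevfloat returns the next floating point number below its argument):

1 - cos(1e-8)           # 0.0; the true value, about 5.0e-17, is lost
1.0 - prevfloat(1.0)    # ≈ 1.1e-16, the gap below 1.0, which exceeds 5.0e-17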
Not that we needed to. The answer would have been clear had we stopped with x=1e-6 (with n=6), say.
In general, some functions will frustrate the numeric approach. It is best to be wary of results. At a minimum they should confirm what a quick graph shows, though even that isn’t enough, as this next example shows.
Example
Let $h(x) = x^2 + 1 + \log(|11x - 15|)/99$.

The question is to investigate $\lim_{x \to 15/11} h(x)$.
A plot shows the answer appears to be straightforward:
h(x) = x^2 + 1 + log(abs(11*x - 15))/99
plot(h, 15/11 - 1, 15/11 + 1)
Taking values near $c = 15/11$ gives this table:
c = 15/11
lim(h, c; n = 16)
1.4636363636363636   3.14319413404761
1.3736363636363635   2.8645811533102963
1.3646363636363634   2.816678263482071
1.3637363636363635   2.7909642927447527
1.3636463636363636   2.7674603930164814
1.3636373636363635   2.7441774121775575
1.3636364636363636   2.720916522338558
1.3636363736363635   2.6976578414707397
1.3636363646363636   2.6743993809900446
1.3636363637363635   2.651140928572084
1.3636363636463635   2.627882427796284
1.3636363636373636   2.6046233399900136
1.3636363636364635   2.581345310110671
1.3636363636363735   2.5579552637136134
1.3636363636363646   2.534529865784849
1.3636363636363635   2.5164312852875934
⋮                    ⋮
c                    L?
⋮                    ⋮
1.3636363636363635   2.5164312852875934
1.3636363636363624   2.5374357453045584
1.3636363636363535   2.5581195113991324
1.3636363636362636   2.581361641577064
1.3636363636353634   2.604626602057928
1.3636363636263635   2.6278825908598917
1.3636363635363635   2.65114096065026
1.3636363626363635   2.6743993787978617
1.3636363536363636   2.6976577870884038
1.3636362636363635   2.7209159769166362
1.3636353636363636   2.744171957635366
1.3636263636363635   2.767405847562099
1.3635363636363635   2.7904188381993147
1.3626363636363636   2.8112237180275277
1.3536363636363635   2.8100356987648416
1.2636363636363634   2.5977395885930648
(Though the graph and table do hint at something a bit odd – the graph shows a blip, the table doesn’t show values in the second column going towards a specific value.)
However, the limit in this case is $-\infty$: the term $\log(|11x - 15|)/99$ heads to $-\infty$ as $x$ approaches $15/11$, but so slowly (note the division by $99$) that neither the graph nor the table of values makes this apparent.
18.2.2 Richardson extrapolation
The Richardson package provides a function, extrapolate, to extrapolate the values of f(x) to x = x0, as the numeric limit does. We illustrate its use by example:
f(x) = sin(x)/x
extrapolate(f, 1)
(0.9999999999922424, 4.538478481919128e-9)
The answer involves two terms, the second being an estimate for the error in the estimation of f(0).
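Since the return value is a tuple, the estimate and the error estimate can be unpacked, for example:

est, err = extrapolate(f, 1)   # est ≈ 1.0 is the estimate, err the error estimate
est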
The values the method chooses could be viewed as follows:
extrapolate(1) do x # using `do` notation for the function
@show x
sin(x)/x
end
x = 1.0
x = 0.125
x = 0.015625
x = 0.001953125
x = 0.000244140625
(0.9999999999922424, 4.538478481919128e-9)
The extrapolate function avoids the numeric problems encountered above with the function f(x) = (1 - cos(x)) / x^2:
f(x) = (1 - cos(x)) / x^2
extrapolate(f, 1)
(0.5000000007193545, 4.705535960880525e-11)
To find limits at a value of c not equal to 0, we set the x0 argument. For example,
f(x) = (sqrt(x) - 5) / (sqrt(x-16) - 3)
c = 25
extrapolate(f, 1, x0=25)
(0.6000000000063564, 1.734057342162032e-12)
This value can also be Inf, in anticipation of infinite limits to be discussed in a subsequent section:
f(x) = (x^2 - 2x + 1)/(x^3 - 3x^2 + 2x + 1)
extrapolate(f, 10, x0=Inf)
(0.0, 0.0)
(The starting value should be to the right of any zeros of the denominator.)
18.3 Symbolic approach to limits
The SymPy package provides a limit function for finding the limit of an expression in a given variable. It must be loaded, as was done initially. The limit function's use requires the expression, the variable, and a value for $c$, the point being approached.

For example, the limit of $(1 - \cos(x))/x^2$ at $c = 0$ is found by:
@syms x::real
limit((1 - cos(x)) / x^2, x => 0)
The pair notation (x => 0) is used to indicate the variable and the value it is going to. A dir argument indicates the direction: a right limit (dir="+", the default), a left limit (dir="-"), or both sides (dir="+-").
Example
We look again at the function $h$, which, despite having a vertical asymptote at $x = 15/11$, appeared from its graph and table of values to have a finite limit there.

We find the limit symbolically at the exact rational 15//11 and not at the floating point approximation returned by 15/11:
h(x) = x^2 + 1 + log(abs(11x - 15))/99
limit(h(x), x => 15 // 11)
Example
Find the limits:

$\lim_{x \to 0} \frac{2\sin(x) - \sin(2x)}{x - \sin(x)}, \quad \lim_{x \to 0} \frac{e^x - 1 - x}{x^2}, \quad \lim_{\rho \to 1} \frac{x^{1-\rho} - 1}{1 - \rho}.$
We have for the first:
limit( (2sin(x) - sin(2x)) / (x - sin(x)), x => 0; dir="+-")
(The dir="+-" argument indicates taking both a right and a left limit and ensuring both exist and are equal.)
The second is similarly done, though here we define a function for variety:
f(x) = (exp(x) - 1 - x) / x^2
limit(f(x), x => 0; dir="+-")
Finally, for the third we define a new variable and proceed:
@syms rho::real
limit( (x^(1-rho) - 1) / (1 - rho), rho => 1; dir="+-")
This last limit demonstrates that the limit function of SymPy can readily evaluate limits that involve parameters, though at times some assumptions on the parameters may be needed, as was done through rho::real.
However, for some cases, the assumptions will not be enough, as they are broad. (E.g., something might be true for some values of the parameter and not others and these values aren’t captured in the assumptions.) So the user must be mindful that when parameters are involved, the answer may not reflect all possible cases.
Example: floating point conversion issues
The Gruntz algorithm implemented in SymPy for symbolic limits is quite powerful. However, some care must be exercised to avoid undesirable conversions from exact values to floating point values.
In a previous example, we used 15//11 and not 15/11, as the former converts to an exact symbolic value for use in SymPy, but the latter would be approximated in floating point before this conversion, so the exactness would be lost.
To illustrate further, let's look at the limit of $\cos(x)/(x - \pi/2)$ as $x$ approaches $\pi/2$, starting with a naive evaluation:
j(x) = cos(x) / (x - pi/2)
j(pi/2)
Inf
The value is not NaN, rather Inf. This is because cos(pi/2) is not exactly 0, as pi/2 is rounded to a floating point number, while the denominator x - pi/2 is exactly 0. This minor difference is important. If we try to correct for this by using the symbolic PI, we have:
limit(j(x), x => PI/2)
The value is not right, as this simple graph suggests the limit is in fact $-1$:
plot(j, pi/4, 3pi/4)
The difference between pi and PI can be significant, and though usually pi is silently converted to PI, that doesn't happen here, as the division by 2 happens first, which turns the irrational constant into an approximate floating point number. Hence, SymPy is giving the correct answer for the problem it is given; it just isn't the problem we wanted to look at.
Trying again, being more aware of how pi and PI differ, we have:
f(x) = cos(x) / (x - PI/2)
limit(f(x), x => PI/2)
(The value pi can be exactly converted to PI when used in SymPy, as it is of type Irrational and is not a floating point value. However, the expression pi/2 converts pi to a floating point value and then divides by 2, hence the loss of exactness when used symbolically.)
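The size of that difference can be inspected directly; this sketch subtracts the floating point pi/2 from the exact symbolic PI/2 and evaluates numerically with N:

N(PI/2 - pi/2)   # ≈ 6.1e-17, the rounding error that cos(pi/2) picks up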
Example: left and right limits
Right and left limits will be discussed in the next section; here we give an example of the idea. The mathematical convention is to say a limit exists if both the left and right limits exist and are equal. Informally, a right (left) limit at $c$ only considers values of $x$ greater (less) than $c$. The limit function of SymPy finds directional limits by default: a right limit, where $x > c$.
The left limit can be found by passing the argument dir="-". Passing dir="+-" (and not "-+"), as done in a few examples above, will compute the mathematical limit, throwing an error in Python if no limit exists.
limit(ceil(x), x => 0), limit(ceil(x), x => 0, dir="-")
(1, 0)
This accurately shows that the limit does not exist mathematically, but that limit(ceil(x), x => 0) does exist (as it finds a right limit).
18.4 Rules for limits
The limit function doesn't compute limits from the definition; rather, it applies some known facts about functions within a set of rules. Some of these rules are the following. Suppose the individual limits of $f$ and $g$ at $c$ exist, and $a$ and $b$ are constants. Then:

$\lim_{x \to c} \left(a f(x) + b g(x)\right) = a \lim_{x \to c} f(x) + b \lim_{x \to c} g(x),$

$\lim_{x \to c} \left(f(x) \cdot g(x)\right) = \lim_{x \to c} f(x) \cdot \lim_{x \to c} g(x),$

$\lim_{x \to c} \frac{f(x)}{g(x)} = \frac{\lim_{x \to c} f(x)}{\lim_{x \to c} g(x)}, \quad \text{provided } \lim_{x \to c} g(x) \neq 0.$
These are verbally described as follows, when the individual limits exist and are finite then:
- Limits involving sums, differences or scalar multiples of functions exist and can be computed by first doing the individual limits and then combining the answers appropriately.
- Limits of products exist and can be found by computing the limits of the individual factors and then combining.
- Limits of ratios exist and can be found by computing the limit of the individual terms and then dividing, provided you don't divide by $0$. The last part is really important, as this rule is no help with the common indeterminate form $0/0$.
In addition, consider the composition $f(g(x))$:

Suppose that

- the outer limit, $\lim_{u \to b} f(u) = L$, exists, and
- the inner limit, $\lim_{x \to c} g(x) = b$, exists, and
- for some neighborhood around $c$ (not including $c$), $g(x)$ is not $b$.

Then the limit of $f(g(x))$ at $c$ exists and equals $L$.
An alternative is to assume that $f$ is continuous at $b$, the value of the inner limit.
These rules, together with the fact that our basic algebraic functions have limits that can be found by simple evaluation, mean that many limits are easy to compute.
Example: composition
For example, consider, for some non-zero $k$, the limit $\lim_{x \to 0} \frac{\sin(kx)}{kx}$.

This is clearly related to the function $f(x) = \sin(x)/x$, which has a limit of $1$ as $x$ goes to $0$: it is the composition $f(g(x))$ with $g(x) = kx$.

Basically, when taking the limit as $x$ goes to $0$, the inner term $kx$ also goes to $0$, so the composition rule gives a limit of $1$.

Similarly, $\lim_{x \to 0} \frac{\sin(kx)}{x} = k$,

as this is the limit of $k \cdot \frac{\sin(kx)}{kx} = k \cdot f(g(x))$, and constants factor out of limits.
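A minimal symbolic check of this second computation, introducing, purely for the sketch, a positive parameter k:

@syms k::positive
limit(sin(k*x)/x, x => 0)   # returns k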
Example: products
Consider this more complicated limit found on this Wikipedia page:

$\lim_{x \to 1/2} \frac{\sin(\pi x)}{\pi x} \cdot \frac{\cos(\pi x)}{1 - (2x)^2}.$

We know the first factor has a limit found by evaluation, $\sin(\pi/2)/(\pi/2) = 2/\pi$, whereas the second factor is indeterminate ($0/0$) at $x = 1/2$, so we compute its limit:
l(x) = cos(PI*x) / (1 - (2x)^2)
limit(l(x), x => 1//2)
Putting these together, the product of the two limits is $\frac{2}{\pi} \cdot \frac{\pi}{4} = \frac{1}{2}$, which can be verified directly:
limit(sin(PI*x)/(PI*x) * l(x), x => 1//2)
Example: ratios
Consider again the limit of the second factor, $\frac{\cos(\pi x)}{1 - (2x)^2}$, at $x = 1/2$. Graphing the numerator and the denominator shows that both go to $0$ at $x = 1/2$, the indeterminate form $0/0$:
plot(cos(pi*x), 0.4, 0.6)
plot!(1 - (2x)^2)
However, following Euler's insight that near a point a function is well approximated by a line, we compare the numerator $\cos(\pi x)$ with the line $-\pi(x - 1/2)$ near $x = 1/2$:
plot(cos(pi*x), 0.4, 0.6)
plot!(-pi*(x - 1/2))
Similarly, the denominator 1 - (2x)^2 compares with the line -4(x - 1/2) near x = 1/2:

plot(1 - (2x)^2, 0.4, 0.6)
plot!(-4(x - 1/2))
So around $x = 1/2$ the ratio behaves like $\frac{-\pi(x - 1/2)}{-4(x - 1/2)} = \frac{\pi}{4}$, which is the limit found above.
This is the basis of L’Hôpital’s rule, which we will return to once the derivative is discussed.
Example: sums
If it is known that the following limit exists by some means:
Then this limit will exist
Why? We can express the function
18.4.1 The squeeze theorem
We note one more limit law, the squeeze theorem. Suppose we wish to compute $\lim_{x \to c} f(x)$ and we can find two other functions, $l$ and $u$, such that:

- $l(x) \leq f(x) \leq u(x)$ for all $x$ near $c$ (possibly not including $c$).
- These limits exist and are equal: $\lim_{x \to c} l(x) = \lim_{x \to c} u(x) = L$.

Then the limit of $f$ at $c$ also exists and equals $L$.
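For instance, anticipating a question at the end of this section, the function x*sin(1/x) is squeezed between -|x| and |x| near 0, both of which go to 0 there; a symbolic computation agrees with the value the squeeze theorem guarantees:

limit(x * sin(1/x), x => 0)   # returns 0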
18.5 Limits from the definition
The formal definition of a limit involves clarifying what it means for $f(x)$ to be "close to $L$" when $x$ is "close to $c$". These are quantified by inequalities: the limit is $L$ if for any $\epsilon > 0$ there is a $\delta > 0$ such that whenever $0 < |x - c| < \delta$ it follows that $|f(x) - L| < \epsilon$.

The key is identifying a value of $\delta$ for a given value of $\epsilon$.

A simple case is the linear case. Consider the function $f(x) = 3x + 2$ and the claim that $\lim_{x \to 1} f(x) = 5$.

We show "numerically" that the choice $\delta = \epsilon/3$ works:
f(x) = 3x + 2
c, L = 1, 5
epsilon = rand() # some number in (0,1)
delta = epsilon / 3
xs = c .+ delta * rand(100) # 100 numbers, c < x < c + delta
as = [abs(f(x) - L) < epsilon for x in xs]
all(as) # are all the as true?
true
These lines produce true if, for a randomly chosen epsilon and the corresponding delta = epsilon/3, the defining inequality |f(x) - L| < epsilon holds for each of 100 randomly chosen values of x with c < x < c + delta.

(Of course, we should also verify values to the left of c, as in the sketch below.)
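A sketch of that left-side check, reusing f, c, L, epsilon, and delta from above:

xs = c .- delta * rand(100)                  # 100 numbers with c - delta < x <= c
all(abs(f(x) - L) < epsilon for x in xs)     # true: the same delta works on the left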
(The random numbers produced by rand are technically in [0, 1), so epsilon could be 0. The above approach would be more solid if some guard, such as epsilon = max(eps(), rand()), were used. As the formal definition is the domain of paper-and-pencil work, we don't fuss.)
In this case, the algebra behind the choice is direct: $|f(x) - 5| = |3x + 2 - 5| = 3|x - 1| < 3\delta = \epsilon$.
18.6 Questions
Question
From the graph, find the limit:
Question
From the graph, find the limit
Question
Graphically investigate the limit
What is the value of
Question
Graphically investigate the limit
The limit exists, what is the value?
Question
Select the graph for which there is no limit at
Question
The following limit is commonly used:
Factoring out
What is
Question
The following limit is commonly used:
The answer should depend on
Using the last result, what is the value of
Question
Find the limit as
Question
Find the limit as
Question
Find the limit as
Question
Find the limit
Question
The limit of $\sin(x)/x$ at $0$ is $1$ when $x$ is measured in radians. When degrees are used instead, the limit can be computed with limit(sind(x)/x, x => 0). What is the value?

What is the limit limit(sinpi(x)/x, x => 0)?
Question: limit properties
There are several properties of limits that allow one to break down more complicated problems into smaller subproblems. For example,

$\lim_{x \to c} \left(f(x) + g(x)\right) = \lim_{x \to c} f(x) + \lim_{x \to c} g(x)$

is notation to indicate that one can either take the limit of the sum of two functions, or take the limit of each first and then add, and the answer will be unchanged, provided all the limits in question exist.
Use one or the other to find the limit of the given expression.
Question
The key assumption made above in being able to write
when
This example shows why it is important.
Take
We have
Question
Does this function have a limit as
Question
Compute the limit
Question
Compute the limit
Question
Compute the limit
Question
Compute the limit
Question
Some limits involve parameters. For example, suppose we define ex as follows:
@syms m::real k::real
ex = (1 + k*x)^(m/x)
What is limit(ex, x => 0)?
Question
For a given
Question
For positive integers
Question
What does SymPy
find for the limit of ex
(limit(ex, x => 0)
), as defined here:
@syms x a
ex = (a^x - 1)/x
Should SymPy have needed an assumption like the following?
@syms a::positive
(a,)
Question
The limit

$\lim_{x \to 0} \left( \frac{a^x - x\log(a)}{b^x - x\log(b)} \right)^{1/x^2}$

for $a = 3$ and $b = 2$ can be computed symbolically two different ways:
@syms x
a, b = 3, 2
f(x) = ((a^x - x*log(a))/(b^x - x*log(b)))^(1/x^2)
limit(f(x), x=>0)
or
@syms x a b
f(x) = ((a^x - x*log(a))/(b^x - x*log(b)))^(1/x^2)
L = limit(f(x), x=>0)
L(a => 3, b=>2)
Which is correct?
Question: The squeeze theorem
Let's look at the function $f(x) = x \sin(1/x)$ near $0$, plotted along with $|x|$ and $-|x|$:
f(x) = x == 0 ? NaN : x * sin(1/x)
c, delta = 0, 1/4
plot(f, c - delta, c + delta)
plot!(abs)
plot!(x -> -abs(x))
This graph clearly oscillates near $0$, yet the function is squeezed between the graphs of $|x|$ and $-|x|$, both of which go to $0$ at $0$, so the squeeze theorem says the limit at $0$ is $0$.
(The Wikipedia entry for the squeeze theorem has this unverified, but colorful detail:
In many languages (e.g. French, German, Italian, Hungarian and Russian), the squeeze theorem is also known as the two policemen (and a drunk) theorem, or some variation thereof. The story is that if two policemen are escorting a drunk prisoner between them, and both officers go to a cell, then (regardless of the path taken, and the fact that the prisoner may be wobbling about between the policemen) the prisoner must also end up in the cell.
Question
Archimedes, in finding bounds on the value of $\pi$, used inscribed and circumscribed regular polygons, whose perimeters involve values of sin and cos computable by simple geometry. (See Damini and Abhishek.) These exact solutions led to subsequent bounds. A more modern approach bounds the circumference $C$ of a circle of radius $r$ with a circumscribed regular polygon of $n = 2\pi/\theta$ sides, each side having length $2x$ where $x = r\tan(\theta/2)$:
@syms theta::real r::real
(theta, r)
x = r * tan(theta/2)
n = 2PI/theta # using PI to avoid floating point roundoff in 2pi
# C < n * 2x
upper = n*2x
A lower bound uses an inscribed polygon built from triangles with angle $\theta/2$ and hypotenuse $r$, so each half-side has length $x = r\sin(\theta/2)$:
x = r*sin(theta/2)
n = 2PI/theta
# C > n * 2x
lower = n*2x
Using the above, find the limit of upper and lower as theta goes to 0. Are the two equal, and equal to a familiar value?
(If so, then the squeeze theorem would say that the circumference $C$ is that common value.)