This section will use the following add-on packages:
using CalculusWithJulia, Plots
plotly()
Plots.PlotlyBackend()
A mathematical function is defined abstractly by:
Function: A function is a relation which assigns to each element in the domain a single element in the range. A relation is a set of ordered pairs, \((x,y)\). The set of first coordinates is the domain, the set of second coordinates the range of the relation.
That is, a function gives a correspondence between values in its domain with values in its range.
This definition is abstract, as functions can be very general. With single-variable calculus, we generally specialize to real-valued functions of a single variable (univariate, scalar functions). These typically have the correspondence given by a rule, such as \(f(x) = x^2\) or \(f(x) = \sqrt{x}\). The function’s domain may be implicit (as in all \(x\) for which the rule is defined) or may be explicitly given as part of the rule. The function’s range is then the image of its domain, or the set of all \(f(x)\) for each \(x\) in the domain (\(\{f(x): x \in \text{ domain}\}\)).
Some examples of mathematical functions are:
\[ f(x) = \cos(x), \quad g(x) = x^2 - x, \quad h(x) = \sqrt{x}, \quad s(x) = \begin{cases} -1 & x < 0\\1&x>0\end{cases}. \]
For these examples, the domain of both \(f(x)\) and \(g(x)\) is all real values of \(x\), whereas for \(h(x)\) it is implicitly just the set of non-negative numbers, \([0, \infty)\). Finally, for \(s(x)\), we can see that the function is defined for every \(x\) but \(0\).
In general the range is harder to identify than the domain, and this is the case for these functions too. For \(f(x)\) we may know the \(\cos\) function is trapped in \([-1,1]\) and it is intuitively clear that all values in that set are possible. The function \(h(x)\) would have range \([0,\infty)\). The \(s(x)\) function is either \(-1\) or \(1\), so only has two possible values in its range. What about \(g(x)\)? It is a parabola that opens upward, so any \(y\) values below the \(y\) value of its vertex will not appear in the range. In this case, the symmetry indicates that the vertex will be at \((1/2, -1/4)\), so the range is \([-1/4, \infty)\).
Thanks to Euler (1707-1783): The formal idea of a function is a relatively modern concept in mathematics. According to Dunham,
Euler defined a function as an “analytic expression composed in any way whatsoever of the variable quantity and numbers or constant quantities.” He goes on to indicate that as Euler matured, so did his notion of function, ending up closer to the modern idea of a correspondence not necessarily tied to a particular formula or “analytic expression.” He finishes by saying: “It is fair to say that we now study functions in analysis because of him.”
We will see that defining functions within Julia
can be as simple a concept as Euler started with, but that the more abstract concept has a great advantage that is exploited in the design of the language.
The notation Julia
uses to define simple mathematical functions could not be more closely related to how they are written mathematically. For example, the functions \(f(x)\), \(g(x)\), and \(h(x)\) above may be defined by:
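f(x) = cos(x)
g(x) = x^2 - x
h(x) = sqrt(x)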
The equals sign here indicates an assignment: a function with the given signature is defined and attached to the method table for the given function name. The right-hand side is simply Julia
code to compute the rule corresponding to the function.
Calling the function also follows standard math notation:
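For instance (the argument values are chosen just for illustration):
f(pi), g(2), h(4)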
For typical cases like the three above, there isn’t really much new to learn.
The equals sign in Julia
always indicates either an assignment or a mutation of the object on the left side. The definition of a function above is an assignment, in that a function is added (or modified) in a table holding the methods associated with the function’s name.
The equals sign restricts the expressions available on the left-hand side to a) a variable name, for assignment; b) mutating an object at an index, as in xs[1]
; c) mutating a property of a struct; or d) a function assignment following this form function_name(args...)
.
Whereas function definitions and usage in Julia mirror standard math notation, equations in math are not so mirrored in Julia. In mathematical equations, the left-hand side of an equation is typically a complicated algebraic expression. Not so with Julia, where the left-hand side of the equals sign is prescribed and quite limited.
Functions in Julia
have an implicit domain, just as they do mathematically. In the case of \(f(x)\) and \(g(x)\), the right-hand side is defined for all real values of \(x\), so the domain is all \(x\). For \(h(x)\) this isn’t the case, of course. Trying to call \(h(x)\) when \(x < 0\) will give an error:
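h(-1.0)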
LoadError: DomainError with -1.0:
sqrt was called with a negative real argument but will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).
The DomainError
is one of many different error types Julia
has; in this case it is quite apt: the value \(-1\) is not in the domain of the function.
Mathematically we tend to blur the distinction between the equation
\[ y = 5/9 \cdot (x - 32) \]
and the function
\[ f(x) = 5/9 \cdot (x - 32) \]
In fact, the graph of a function \(f(x)\) is simply defined as the graph of the equation \(y=f(x)\). There is a distinction in Julia: a command such as y = 5/9 * (x - 32) will evaluate the right-hand side with the value of x bound at the time of assignment to y, whereas a function definition such as f(x) = 5/9 * (x - 32) will create a function object with a value of x determined at a later time - the time the function is called. So the value of x defined when the function is created is not important here (as the value of x used by f is passed in as an argument).
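For example (a minimal sketch; the particular numbers are illustrative):
x = -40
y = 5/9 * (x - 32)     # y is assigned -40.0 now; later changes to x will not change y
f(x) = 5/9 * (x - 32)  # f uses whatever x is passed in when it is called, e.g. f(72)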
Within Julia
, we make note of the distinction between a function object and a function call. In the definition f(x)=cos(x)
, the variable f
refers to a function object, whereas the expression f(pi)
is a function call. This mirrors the math notation where \(f\) is used when properties of a function are being emphasized (such as \(f \circ g\) for composition) and \(f(x)\) is used when the values related to the function are being emphasized (such as saying “the plot of the equation \(y=f(x)\)”).
Distinguishing these three related but different concepts (equations, function objects, and function calls) is important when modeling on the computer.
The definition of \(s(x)\) above has two cases:
\[ s(x) = \begin{cases} -1 & x < 0\\ 1 & x > 0. \end{cases} \]
We learn to read this as: when \(x\) is less than \(0\), then the answer is \(-1\). If \(x\) is greater than \(0\) the answer is \(1.\) Often - but not in this example - there is an “otherwise” case to catch those values of \(x\) that are not explicitly mentioned. As there is no such “otherwise” case here, we can see that this function has no definition when \(x=0\). This function is often called the “sign” function and is also defined by \(\lvert x\rvert/x\). (Julia
’s sign
function actually defines sign(0)
to be 0
.)
How do we create conditional statements in Julia
? Programming languages generally have “if-then-else” constructs to handle conditional evaluation. In Julia
, the following code will handle the above condition:
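(A sketch, wrapping the conditional in a function so that it can be called with a value for x.)
function s(x)
    if x < 0
        -1
    elseif x > 0
        1
    end
end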
The “otherwise” case would be caught with an else
addition. So, for example, this would implement Julia
’s definition of sign
(which also assigns \(0\) to \(0\)):
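(A sketch; the name mysign is chosen to avoid clashing with the built-in sign function.)
function mysign(x)
    if x < 0
        -1
    elseif x > 0
        1
    else
        0
    end
end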
The conditions for the if
statements are expressions that evaluate to either true
or false
, such as generated by the Boolean operators <
, <=
, ==
, !=
, >=
, and >
.
If familiar with if
conditions, they are natural to use. However, for simpler cases of “if-else” Julia
provides the more convenient ternary operator: cond ? if_true : if_false
. (The name comes from the fact that there are three arguments specified.) The ternary operator checks the condition: if the condition is true, the first expression is returned; if it is false, the second expression is returned. (Another useful control flow construct is short-circuit evaluation.)
For example, here is one way to define an absolute value function:
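f(x) = x >= 0 ? x : -x   # the name f is reused here just for illustration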
The condition is x >= 0
- or is x
non-negative? If so, the value x
is used, otherwise -x
is used.
Here is a means to implement a function which takes the larger of x
or 10
:
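f(x) = x > 10 ? x : 10.0   # returns 10.0 so that the output is always a floating point value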
(This could also utilize the max
function: f(x) = max(x, 10.0)
.)
Or similarly, a function to represent a cell phone plan where the first \(500\) minutes are \(20\) dollars and every additional minute is \(5\) cents:
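(A sketch; the name cellplan is chosen for illustration.)
cellplan(x) = x < 500 ? 20.0 : 20.0 + 0.05 * (x - 500)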
Type stability. These last two definitions used 10.0
and 20.0
instead of the integers 10
and 20
for the answer. Why the extra typing? When Julia
can predict the type of the output from the type of inputs, it can be more efficient. So when possible, we help out and ensure the output is always the same type.
The ternary
operator can be used to define an explicit domain. For example, a falling body might have height given by \(h(t) = 10 - 16t^2\). This model only applies for non-negative \(t\) and non-negative \(h\) values. So, in particular \(0 \leq t \leq \sqrt{10/16}\). To implement this function we might have:
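(A sketch; signaling values outside the domain with error is one choice among several.)
h(t) = 0 <= t <= sqrt(10/16) ? 10.0 - 16t^2 : error("t is not in the domain")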
The function s(x)
isn’t quite so easy to implement, as there isn’t an “otherwise” case. We could use an if
statement, but instead illustrate using a second, nested ternary operator:
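(A sketch; how the undefined value at 0 is handled - here with error - is a choice.)
s(x) = x < 0 ? -1 : (x > 0 ? 1 : error("0 is not in the domain"))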
With nested ternary operators, the advantage over the if
condition is not always compelling, but for simple cases the ternary operator is quite useful.
For more complicated functions, say one with a few steps to compute, an alternate form for defining a function can be used:
function function_name(function_arguments)
...function_body...
end
The last value computed is returned unless the function_body
contains an explicit return
statement.
For example, the following is a more verbose way to define \(sq(x) = x^2\):
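function sq(x)
    return x^2
end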
The line return x^2 could have just been x^2, as it is the last (and only) line evaluated.
The return
keyword is not a function, so is not called with parentheses. An empty return
statement will return a value of nothing
.
Imagine we have the following complicated function related to the trajectory of a projectile with wind resistance:
\[ f(x) = \left(\frac{g}{k v_0\cos(\theta)} + \tan(\theta) \right) x + \frac{g}{k^2}\ln\left(1 - \frac{k}{v_0\cos(\theta)} x \right) \]
Here \(g\) is the gravitational constant \(9.8\) and \(v_0\), \(\theta\), and \(k\) are parameters, which we take to be \(200\), \(45\) degrees, and \(1/2\) respectively. With these values, the above function can be computed when \(x=100\) with:
function trajectory(x)
g, v0, theta, k = 9.8, 200, 45*pi/180, 1/2
a = v0 * cos(theta)
(g/(k*a) + tan(theta))* x + (g/k^2) * log(1 - k/a*x)
end
trajectory (generic function with 1 method)
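Calling it at \(x=100\) gives (the same value is computed again further below with explicit parameters):
trajectory(100)
96.7577179163216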
By using a multi-line function our work is much easier to look over for errors.
This next example shows how using functions to collect a set of computations for simpler reuse can be very helpful.
An old method for finding a zero of an equation is the secant method. We illustrate the method with the function \(f(x) = x^2 - 2\). In an upcoming example we see how to create a function to evaluate the secant line between \((a,f(a))\) and \((b, f(b))\) at any point. In this example, we define a function to compute the \(x\) coordinate of where the secant line crosses the \(x\) axis. This can be defined as follows:
function secant_intersection(f, a, b)
# solve 0 = f(b) + m * (x-b) where m is the slope of the secant line
# x = b - f(b) / m
m = (f(b) - f(a)) / (b - a)
b - f(b) / m
end
secant_intersection (generic function with 1 method)
We utilize this as follows. Suppose we wish to solve \(f(x) = 0\) and we have two “rough” guesses for the answer. In our example, we wish to solve \(q(x) = x^2 - 2\) and our “rough” guesses are \(1\) and \(2\). Call these values \(a\) and \(b\). We improve our rough guesses by finding a value \(c\) which is the intersection point of the secant line.
In our example, we see that in trying to find an answer to \(f(x) = 0\) ( \(\sqrt{2}\approx 1.414\dots\)) our value found from the intersection point is a better guess than either \(a=1\) or \(b=2\):
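In code (a sketch; the variable names match those used in the loop below):
q(x) = x^2 - 2
𝒂, 𝒃 = 1, 2
𝒄 = secant_intersection(q, 𝒂, 𝒃)   # 4/3, or approximately 1.33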
Still, q(𝒄)
is not really close to \(0\):
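q(𝒄)   # approximately -0.22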
But it is much closer than either \(q(a)\) or \(q(b)\), so it is an improvement. This suggests renaming \(a\) and \(b\) with the old \(b\) and \(c\) values and trying again; we might do better still:
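(One way to carry out this renaming and update is the following.)
𝒂, 𝒃 = 𝒃, 𝒄
𝒄 = secant_intersection(q, 𝒂, 𝒃)
q(𝒄)   # approximately -0.04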
Yes, now the function value at this new \(c\) is even closer to \(0\). Trying a few more times, we see we just get closer and closer. Here we start again to see the progress:
𝒂,𝒃 = 1, 2
for step in 1:6
𝒂, 𝒃 = 𝒃, secant_intersection(q, 𝒂, 𝒃)
current = (c=𝒃, qc=q(𝒃))
@show current
end
current = (c = 1.3333333333333335, qc = -0.22222222222222188)
current = (c = 1.4000000000000001, qc = -0.03999999999999959)
current = (c = 1.4146341463414633, qc = 0.0011897679952408424)
current = (c = 1.41421143847487, qc = -6.007286838860537e-6)
current = (c = 1.4142135620573204, qc = -8.931455575122982e-10)
current = (c = 1.4142135623730954, qc = 8.881784197001252e-16)
Now our guess \(c\) is basically the same as sqrt(2)
. Repeating the above leads to only a slight improvement in the guess, as we are about as close as floating point values will allow.
Here we see a visualization with all these points. As can be seen, it quickly converges at the scale of the visualization, as we can’t see much closer than 1e-2
.
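(A sketch of such a figure; the plotting details are illustrative and the points are taken from the output above.)
pts = [1, 2, 4/3, 1.4, 1.41463, 1.41421]   # initial guesses and successive intersections
plot(q, 1, 2; legend=false)
scatter!(pts, q.(pts))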
In most cases, this method can fairly quickly find a zero provided two good starting points are used.
Consider two functions implementing the slope-intercept form and point-slope form of a line:
\[ f(x) = m \cdot x + b, \quad g(x) = y_0 + m \cdot (x - x_0). \]
Both functions use the variable \(x\), but there is no confusion, as we learn that this is just a dummy variable to be substituted for and so could have any name. Both also share a variable \(m\) for a slope. Where does that value come from? In practice, there is a context that gives an answer. Despite the same name, there is no expectation that the slope will be the same for each function if the context is different. So when parameters are involved, a function involves a rule and a context to give specific values to the parameters. Euler had said initially that functions are composed of “the variable quantity and numbers or constant quantities.” We still use the term “variable,” but instead of “constant quantities” we use the name “parameters.”
Something similar is also true with Julia
. Consider the example of writing a function to model a linear equation with slope \(m=2\) and \(y\)-intercept \(3\). A typical means to do this would be to define constants, and then use the familiar formula:
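m, b = 2, 3
f(x) = m * x + b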
This will work as expected. For example, \(f(0)\) will be \(b\) and \(f(2)\) will be \(7\):
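f(0), f(2)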
All fine, but what if somewhere later the values for \(m\) and \(b\) were redefined, say with \(m,b = 3,2\)?
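m, b = 3, 2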
Now what happens with \(f(0)\)? When \(f\) was defined b
was \(3\), but now if we were to call f
, b
is \(2\). Which value will we get? More generally, when f
is being evaluated in what context does Julia
look up the bindings for the variables it encounters? It could be that the values are assigned when the function is defined, or it could be that the values for the parameters are resolved when the function is called. If the latter, what context will be used?
Before discussing this, let’s just see in this case:
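f(0)   # returns 2, the currently stored value of b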
So the b
is found from the currently stored value. This fact can be exploited: we can write template-like functions, such as f(x)=m*x+b,
and reuse them just by updating the parameters separately.
How Julia
resolves what a variable refers to is described in detail in the manual page Scope of Variables. In this case, the function definition finds variables in the context of where the function was defined, the main workspace, and not where it is called. As seen, this context can be modified after the function definition and prior to the function call. It is only when b
is needed that the context is consulted, so the most recent binding is retrieved. Contexts allow the user to repurpose variable names without name collisions. For example, we typically use x
as a function argument, and different contexts allow this x
to refer to different values.
Mostly this works as expected, but at times it can be complicated to reason about. In our example, definitions of the parameters can be forgotten, or the same variable name may have been used for some other purpose. The potential issue is with the parameters; the value for x
is straightforward, as it is passed into the function. However, we can also pass the parameters, such as \(m\) and \(b\), as arguments. For parameters, one suggestion is to use keyword arguments. These allow the specification of parameters, but also give a default value. This can make usage explicit, yet still convenient. For example, here is an alternate way of defining a line with parameters m
and b
:
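(A sketch; the name mxplusb - used again further below - and the defaults of m=1 and b=0 follow the discussion.)
mxplusb(x; m=1, b=0) = m * x + b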
The right-hand side is identical to before, but the left hand side is different. Arguments defined after a semicolon are keyword arguments. They are specified as var=value
(or var::Type=value
to restrict the type) where the value is used as the default, should a value not be specified when the function is called.
Calling a function with keyword arguments can be identical to before:
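mxplusb(0)   # no keywords specified, so the defaults m=1 and b=0 are used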
During this call, values for m
and b
are found from how the function is called, not the main workspace. In this case, nothing is specified so the defaults of \(m=1\) and \(b=0\) are used. Whereas, this call will use the user-specified values for m
and b
:
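(The particular values here are illustrative.)
mxplusb(0, m=3, b=2)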
Keywords are used to mark the parameters whose values are to be changed from the default. Though one can use positional arguments for parameters - and there are good reasons to do so - using keyword arguments is a good practice if performance isn’t paramount, as their usage is more explicit, yet the defaults mean that a minimum amount of typing needs to be done. Keyword arguments are widely used with plotting commands, as there are numerous options to adjust, but typically only a handful are adjusted per call.
In the example for multi-line functions we hard coded many variables inside the body of the function. In practice it can be better to pass these in as parameters along the lines of:
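(A sketch using keyword arguments, with defaults matching the values hard coded above.)
function trajectory(x; g=9.8, v0=200, theta=45*pi/180, k=1/2)
    a = v0 * cos(theta)
    (g/(k*a) + tan(theta)) * x + (g/k^2) * log(1 - k/a*x)
end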
The f(x, p) style for parameterization
An alternative to keyword arguments is to bundle the parameters into a container and pass them as a single argument to the function. The idiom in Julia is to use the second argument for parameters, or f(x, p) for the function argument specifications. This style is used in the very popular SciML suite of packages.
For example, here we use a named tuple to pass parameters to f
:
function trajectory(x, p)
g, v0, theta, k = p.g, p.v0, p.theta, p.k # unpack parameters
a = v0 * cos(theta)
(g/(k*a) + tan(theta))* x + (g/k^2) * log(1 - k/a*x)
end
p = (g=9.8, v0=200, theta = 45*pi/180, k=1/2)
trajectory(100, p)
96.7577179163216
The style isn’t so different from using keyword arguments, save the extra step of unpacking the parameters. The big advantage is consistency – the function is always called in an identical manner regardless of the number of parameters (or variables).
The concept of a function is of much more general use than its restriction to mathematical functions of a single real variable. A natural application comes from describing basic properties of geometric objects. The following function definitions likely will cause no great concern when skimmed over:
Area(w, h) = w * h # of a rectangle
Volume(r, h) = pi * r^2 * h # of a cylinder
SurfaceArea(r, h) = pi * r * (r + sqrt(h^2 + r^2)) # of a right circular cone, including the base
SurfaceArea (generic function with 1 method)
The right-hand sides may or may not be familiar, but it should be reasonable to believe that if push came to shove, the formulas could be looked up. However, the left-hand sides are subtly different - they have two arguments, not one. In Julia
it is trivial to define functions with multiple arguments - we just did.
Earlier we saw the log
function can use a second argument to express the base. This function is basically defined by log(b,x)=log(x)/log(b)
. The log(x)
value is the natural log, and this definition just uses the change-of-base formula for logarithms.
But not so fast: on the left side is a function with two arguments and on the right side the functions have one argument - yet they share the same name. How does Julia
know which to use? Julia
uses the number, order, and type of the positional arguments passed to a function to determine which function definition to use. This is technically known as multiple dispatch or polymorphism. As a feature of the language, it can be used to greatly simplify the number of functions the user must learn. The basic idea is that many functions are “generic” in that they have methods which will work differently in different scenarios.
Multiple dispatch is very common in mathematics. For example, we learn different ways to add: integers (fingers, carrying), real numbers (align the decimal points), rational numbers (common denominators), complex numbers (add components), vectors (add components), polynomials (combine like monomials), … yet we just use the same +
notation for each operation. The concepts are related, the details different.
Julia
is similarly structured. Julia
terminology would be to call the operation “+
” a generic function and the different implementations methods of “+
”. This allows the user to just need to know a smaller collection of generic concepts yet still have the power of detail-specific implementations. To see how many different methods are defined in the base Julia
language for the +
operator, we can use the command methods(+)
. As there are so many (\(\approx 200\)) and that number is growing, we illustrate how many different logarithm methods are implemented for “numbers:”
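(One way to list these, restricting attention to methods applicable to a number.)
methods(log, (Number,))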
(The arguments have type annotations such as x::Float64
or x::BigFloat
. Julia
uses these to help resolve which method should be called for a given set of arguments. This allows for different operations depending on the variable type. For example, in this case, the log
function for Float64
values uses a fast algorithm, whereas for BigFloat
values an algorithm that can handle multiple precision is used.)
A common usage of multiple dispatch is, as is done with log
above, to restrict the type of an argument and define a method for just this type. Types in Julia
can be abstract or concrete. This distinction is important when constructing composite types (which we are not doing here), but otherwise not so important. In the following example, we use the abstract types Integer
, Real
, and Complex
to define methods for a function we call twotox
:
function twotox(x::Integer)
(2//1)^x
end
twotox(x::Real) = (2.0)^x
twotox(x::Complex) = (2.0 + 0.0im)^x
twotox (generic function with 3 methods)
This is for illustration purposes – the latter two are actually already handled through Julia’s promotion mechanism – but we see that twotox will return a rational number when x is an integer. This is unlike Julia’s built-in power operation, which for an integer x returns an integer when x is non-negative and otherwise errors or returns a float (the latter when x appears as a numeric literal exponent, like 2^(-3)).
The key to reading the above is that the type annotation acts like a gatekeeper, allowing in only values of that type.
For example, the number 2
is parsed as a 64-bit integer (typically) and has concrete type Int64
which is a subtype of Integer
. So twotox(2)
will use the first definition, and return a rational number. Whereas, the number 2.0
is parsed as a floating point number with concrete type Float64
which is a subtype of Real
, not Integer
, so twotox(2.0)
will use the second method defined above.
As mentioned, Julia’s multiple dispatch allows a function name to have multiple methods. The method that gets selected depends not just on the type of the arguments, but also on the number of arguments given to the function. We can exploit this to simplify our tasks. For example, consider this optimization problem:
For all rectangles of perimeter \(20\), what is the one with largest area?
The start of this problem is to represent the area in terms of one variable. We see next that composition can simplify this task, which when done by hand requires a certain amount of algebra.
Representing the area of a rectangle in terms of two variables is easy, as the familiar formula of width times height applies:
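Area(w, h) = w * h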
But the other fact about this problem - that the perimeter is \(20\) - means that height depends on width. For this question, we can see that \(P=2w + 2h\) so that - as a function - height
depends on w
as follows:
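height(w) = (20 - 2*w) / 2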
By hand we would substitute this last expression into that for the area and simplify (to get \(A=w\cdot (20-2 \cdot w)/2 = -w^2 + 10w\)). However, within Julia
we can let composition do the substitution and leave the algebraic simplification for Julia
to do:
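Area(w) = Area(w, height(w))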
This might seem odd: just like with log
, we now have two different but related functions named Area
. Julia will decide which to use based on the number of arguments when the function is called. This setup allows both to be used on the same line, as above. This usage style is not so common with many computer languages, but is a feature of Julia
which is built around the concept of generic functions with multiple dispatch rules to decide which rule to call.
For example, jumping ahead a bit, the plot
function of Plots
expects functions of a single numeric variable. Behind the scenes, then, the single-argument function Area(w) will be used in this graph:
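(A sketch; the plotting range 0 to 10 covers the widths possible with a perimeter of 20.)
plot(Area, 0, 10)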
From the graph, we can see that the width for maximum area is \(w=5\) and so \(h=5\) as well.
Julia
has both generic functions and anonymous functions. Generic functions participate in multiple dispatch, a central feature of Julia
. Anonymous functions are very useful with higher-order programming (passing functions as arguments). These notes occasionally take advantage of anonymous functions for convenience.
Simple mathematical functions have a domain and range which are a subset of the real numbers, and generally have a concrete mathematical rule. However, the definition of a function is much more abstract. We’ve seen that functions for computer languages can be more complicated too, with, for example, the possibility of multiple input values. Things can get more abstract still.
Take for example, the idea of the shift of a function. The following mathematical definition of a new function \(g\) related to a function \(f\):
\[ g(x) = f(x-c) \]
has an interpretation - the graph of \(g\) will be the same as the graph of \(f\) shifted to the right by \(c\) units. That is \(g\) is a transformation of \(f\). From one perspective, the act of replacing \(x\) with \(x-c\) transforms a function into a new function. Mathematically, when we focus on transforming functions, the word operator is sometimes used. This concept of transforming a function can be viewed as a certain type of function, in an abstract enough way. The relation would be to just pair off the functions \((f,g)\) where \(g(x) = f(x-c)\).
With Julia
we can represent such operations. The simplest thing would be to do something like:
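(A sketch; the particular choice of f is illustrative.)
f(x) = x^2 - 2     # some function in the main workspace
g(x) = f(x - 3)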
Then \(g\) has the graph of \(f\) shifted by 3 units to the right. Now f
above refers to something in the main workspace, in this example a specific function. Better would be to allow f
to be an argument of a function, like this:
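(A sketch consistent with the description that follows; the default of c=0 is an illustrative choice.)
function shift_right(f; c=0)
    function(x)
        f(x - c)
    end
end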
shift_right (generic function with 1 method)
That takes some parsing. In the body of shift_right is the definition of a function. But this function has no name - it is anonymous. What it does should be clear: it subtracts \(c\) from \(x\) and evaluates \(f\) at this new value. Since the last expression creates a function, this function is returned by shift_right.
So we could have done something more complicated like:
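l = shift_right(f, c=3)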
Then l
is a function that is derived from f
.
The value of c
used when l
is called is the one passed to shift_right
. Functions like l
that are returned by other functions also are called closures, as the context they are evaluated within includes the context of the function that constructs them.
Anonymous functions can be created with the function
keyword, but we will use the “arrow” notation, arg->body
to create them. The above could have been defined as:
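shift_right(f; c=0) = x -> f(x - c)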
When the ->
is seen a function is being created.
Generic versus anonymous functions. Julia has two types of functions, generic ones, as defined by f(x)=x^2
and anonymous ones, as defined by x -> x^2
. One gotcha is that Julia
does not like to use the same variable name for the two types. In general, Julia is a dynamic language, meaning variable names can be reused with different types of variables. But generic functions take more care, as when a new method is defined it gets added to a method table. So repurposing the name of a generic function for something else is not allowed. Similarly, repurposing an already defined variable name for a generic function is not allowed. This comes up when we use functions that return functions as we have different styles that can be used: When we defined l = shift_right(f, c=3)
the variable l is assigned an anonymous function. This binding can later be reused for other values. However, we could have defined the function l through l(x) = shift_right(f, c=3)(x), being explicit about what happens to the variable x. This would add a method to the generic function l. This means we would get an error if we then tried to assign some other value to l, such as with an expression like l=3
. We generally employ the latter style, even though it involves a bit more typing, as we tend to stick to methods of generic functions for consistency.
A secant line is a line through two points on the graph of a function. If we have a function \(f(x)\), and two \(x\)-values \(x=a\) and \(x=b\), then we can find the slope between the points \((a,f(a))\) and \((b, f(b))\) with:
\[ m = \frac{f(b) - f(a)}{b - a}. \]
The point-slope form of a line then gives the equation of the secant line as \(y = f(a) + m \cdot (x - a)\).
To model this in Julia
, we would want to turn the inputs f
,a
, b
into a function that implements the secant line (functions are much easier to work with than equations). Here is how we can do it:
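(A sketch following the formulas above.)
function secant(f, a, b)
    m = (f(b) - f(a)) / (b - a)   # the slope of the secant line
    x -> f(a) + m * (x - a)       # the line in point-slope form, as an anonymous function
end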
secant (generic function with 1 method)
The body of the function nearly mirrors the mathematical treatment. The main difference is that in place of \(y = \dots\) we have x -> ... to create an anonymous function.
To illustrate the use, suppose \(f(x) = x^2 - 2\) and we have the secant line between \(a=1\) and \(b=2\). The value at \(x=3/2\) is given by:
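f(x) = x^2 - 2
a, b = 1, 2
secant(f, a, b)(3/2)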
The last line employs double parentheses. The first pair, secant(f,a,b)
, returns a function and the second pair, (3/2)
, are used to call the returned function.
One main use of anonymous functions is to make closures. We’ve touched on two concepts: functions with parameters and functions as arguments to other functions. The creation of a function for a given set of parameters may be needed. Anonymous functions are used to create closures which capture the values of the parameters. For a simple example, mxplusb
parameterizes any line, but to use a function to represent a specific line, a new function can be created:
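(A sketch; the particular slope and intercept are illustrative.)
specific_line(x) = mxplusb(x; m=3, b=2)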
specific_line (generic function with 1 method)
The returned object will have its parameters (m
and b
) fixed when used.
In Julia
, the functions Base.Fix1
and Base.Fix2
are provided to take functions of two variables and create callable objects of just one variable, with the other argument fixed. This partial function application is also provided by some of the logical comparison operators, which can be useful with filtering, say.
For example, <(2)
is a funny looking way of expressing the function x -> x < 2
. (Think of x < y
as <(x,y)
and then “fix” the value of y
to be 2
.) This is useful with filtering by a predicate function, for example:
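filter(<(2), 0:4)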
which picks off the values of 0
and 1
in a somewhat obscure way but less verbose than filter(x -> x < 2, 0:4)
.
The Fix2
function is also helpful when using the f(x, p)
form for passing parameters to a function. The result of Base.Fix2(f, p)
is a function with its parameters fixed that can be passed along for plotting or other uses.
The do notation
Many functions in Julia accept a function as the first argument. A common pattern for calling some function is action(f, args...) where action is the function that will act on another function f using the value(s) in args.... The do notation is syntactic sugar for creating an anonymous function, which is useful when more complicated function bodies are needed.
Here is an artificial example illustrating a task we won’t have cause to use in these notes, but which is an important skill in some contexts. The do
notation can be confusing to read, as it moves the function definition to the end and not the beginning, but is convenient to write and is used very often with the task of this example.
To save some text to a file requires a few steps: opening the file; writing to the file; closing the file. The open
function does the first. One method has this signature open(f::Function, args...; kwargs...) and is documented to “Apply the function f to the result of open(args...; kwargs...) and close the resulting file descriptor upon completion.” This is great: the open and close stages are handled by Julia
and only the writing is up to the user.
The writing is done in the body of a function, so the do
notation allows the creation of the function to be handled anonymously. In this context, the argument to this function will be an IO
handle, which is typically called io
.
So the pattern would be:
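(A sketch; the file name and the text written are illustrative.)
open("somefile.txt", "w") do io
    println(io, "Hello")
    println(io, "world!")
end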
The name of the file to open appears, how the file is to be opened (w
means write, r
would mean read), and then a function with argument io
which writes two lines to io
.
The typical calling pattern for a function simply follows mathematical notation, that is f(x)
calls the function f
with the argument x
. There are times – especially with function composition – that an alternative piping syntax is desirable. Julia
provides the infix operation |>
for piping, defining it by |>(x, f) = f(x)
. This allows composition to work left to right, instead of right to left. For example, these two calls produce the same answer:
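(An illustrative pair of calls.)
sqrt(sin(2))       # composition written right to left
2 |> sin |> sqrt   # the same computation written left to right with pipes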
While convenient, piping works most naturally with functions of a single variable. When more than one variable is needed for a function, say max
, then things get a bit cumbersome. Using anonymous functions or “fixing” arguments is then useful. For example, here is one way to implement clamp(x, lo, hi)
which returns x
when lo < x < hi
and otherwise lo
or hi
, depending:
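(A sketch; the values of x, lo, and hi are illustrative.)
x, lo, hi = 2, 0, 1
x |> (u -> max(u, lo)) |> (u -> min(u, hi))   # 1, the same as clamp(x, lo, hi)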
Or
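x |> Base.Fix2(max, lo) |> Base.Fix2(min, hi)   # reusing x, lo, hi from above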
For heavy use of chaining through function application there are various packages to simplify the notation (e.g. Chain.jl
, Pipe.jl
, DataPipes.jl
, among many others).
State the domain and range of \(f(x) = |x + 2|\).
State the domain and range of \(f(x) = 1/(x-2)\).
Which of these functions has a domain of all real \(x\), but a range of \(x > 0\)?
Which of these commands will make a function for \(f(x) = \sin(x + \pi/3)\)?
Which of these commands will create a function for \(f(x) = (1 + x^2)^{-1}\)?
Will the following Julia
commands create a function for
\[ f(x) = \begin{cases} 30 & x < 500\\ 30 + 0.10 \cdot (x-500) & \text{otherwise.} \end{cases} \]
The expression max(0, x)
will be 0
if x
is negative, but otherwise will take the value of x
. Is this the same?
In statistics, the normal distribution has two parameters \(\mu\) and \(\sigma\) appearing as:
\[ f(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi\sigma}} e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma}}. \]
Does this function implement this with the default values of \(\mu=0\) and \(\sigma=1\)?
a_normal (generic function with 1 method)
What value of \(\mu\) is used if the function is called as f(x, sigma=2.7)
?
What value of \(\mu\) is used if the function is called as f(x, mu=70)
?
What value of \(\mu\) is used if the function is called as f(x, mu=70, sigma=2.7)
?
Julia
has keyword arguments (as just illustrated) but also positional arguments. These are matched by how the function is called. For example, a definition such as A(w, h) = w * h, when called as A(10, 5), will use 10 for w and 5 for h, as the order of w and h matches that of 10 and 5 in the call.
This is clear enough, but in fact positional arguments can have default values (then called optional arguments). For example, a definition along the lines of B(w, h=5) = w * h (the default value here is a hypothetical choice) actually creates two methods: B(w,h) for when the call is, say, B(10,5), and B(w) for when the call is B(10).
Suppose a function C
is defined by
C (generic function with 3 methods)
This is nearly identical to the last question, save for a comma instead of a semicolon after the x
.
What value of mu
is used by the call C(1, 70, 2.7)
?
What value of mu
is used by the call C(1, 70)
?
What value of mu
is used by the call C(1)
?
Will the call C(1, mu=70)
use a value of 70
for mu
?
This function mirrors that of the built-in clamp
function:
Can you tell what it does?
Julia
has syntax for the composition of functions \(f\) and \(g\) using the Unicode operator ∘
entered as \circ[tab]
.
The notation to call a composition follows the math notation, where parentheses are necessary to separate the act of composition from the act of calling the function:
\[ (f \circ g)(x) \]
For example
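(sin ∘ cos)(pi/4)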
What happens if you forget the extra parentheses and were to call sin ∘ cos(pi/4)
?
The parentheses in (sin ∘ cos)(pi/4)
are needed due to the order of operations, with cos(pi/4)
being evaluated first in the expression sin ∘ cos(pi/4)
. Alternatively, one can define a function sc = sin ∘ cos
(without parentheses), then call it through sc(pi/4)
.
The pipe notation ex |> f
takes the output of ex
and uses it as the input to the function f
. That is composition. What is the value of this expression 1 |> sin |> cos
?
Julia
has implemented this limited set of algebraic operations on functions: ∘
for composition and !
for negation. (Read !
as “not.”) The latter is useful for “predicate” functions (ones that return either true
or false
. What is output by this command?
Generic functions in Julia
allow many algorithms to work without change for different number types. For example, 3000 years ago, floating point numbers wouldn’t have been used to carry out the secant method computations; rather, rational numbers would have been. We can see the results of using rational numbers with no change to our key function, just by starting with rational numbers for a
and b
:
secant_intersection(f, a, b) = b - f(b) * (b - a) / (f(b) - f(a)) # rewritten
f(x) = x^2 - 2
a, b = 1//1, 2//1
c = secant_intersection(f, a, b)
4//3
Now c
is 4//3
and not 1.333...
. This works as the key operations used (division, squaring, subtraction) all have implementations for rational numbers that preserve this type.
Repeat the secant method two more times to find a better approximation for \(\sqrt{2}\). What is the value of c
found?
How small is the value of \(f(c)\) for this value?
How close is this answer to the true value of \(\sqrt{2}\)?
(Finding a good approximation to \(\sqrt{2}\) would be helpful to builders, for example, as it could be used to verify the trueness of a square room, say.)
Julia
does not have surface syntax for the difference of functions. This is a common thing to want when solving equations. The tools available solve \(f(x)=0\), but problems may present as solving for \(h(x) = g(x)\) or even \(h(x) = c\), for some constant. Which of these solutions is not helpful if \(h\) and \(g\) are already defined?
Identifying the range of a function can be a difficult task. We see in this question that in some cases, a package can be of assistance.
A mathematical interval is a set of values of the form \((a,b)\), \([a,b]\), \([a,b)\), or \((a,b]\). They all contain all real numbers between the endpoints; the distinction is whether the endpoints are included or not.
A domain is some set, but typically that set is an interval such as all real numbers (\((-\infty,\infty)\)), all non-negative numbers (\([0,\infty)\)), or, say, all positive numbers (\((0,\infty)\)).
The IntervalArithmetic
package provides an easy means to define closed intervals using the symbol ..
, but this is also used by the already loaded CalculusWithJulia
package in a different manner, so we use the fully qualified named constructor in the following to construct intervals:
The main feature of the package is not to construct intervals, but rather to rigorously bound the image of a closed interval under a function by an interval. That is, for a function \(f\) and closed interval \([a,b]\), a bound for the set \(\{f(x) \text{ for } x \text{ in } [a,b]\}\). When [a,b]
is the domain of \(f\), then this is a bound for the range of \(f\).
For example, the function \(f(x) = x^2 + 2\) has a domain of all real \(x\); its range can be bounded with:
For this problem, the actual range can easily be identified. Does the bound computed match exactly?
Does sin(0..pi)
exactly match the interval of \([0,1]\)?
Guess why or why not?
Now consider the evaluation
Make a graph of f
. Does the interval found above provide a nearly exact estimate of the true range (as the previous two questions have)?
Any thoughts on why?