This section illustrates a more general setting for taking derivatives, one that unifies the different approaches taken previously.
Based on Bright, Edelman, and Johnson’s notes
This section has essentially no original contribution, as it basically samples material from the notes Matrix Calculus (for Machine Learning and Beyond) by Paige Bright, Alan Edelman, and Steven G. Johnson. Their notes cover material taught in a course at MIT. Support materials for their course in Julia are available at https://github.com/mitmath/matrixcalc/tree/main. For more details and examples, please refer to the source.
60.1 Review
We have seen several “derivatives” of a function, based on the number of inputs and outputs. The first one was for functions \(f: R \rightarrow R\).
In this case, we saw that \(f\) has a derivative at \(c\) if this limit exists:
\[
\lim_{\Delta x \rightarrow 0} \frac{f(c + \Delta x) - f(c)}{\Delta x}.
\]
The linearization writes \(f(x+\Delta x) - f(x) \approx f'(x)\Delta x\), where \(\Delta x\) is a small displacement from \(x\). The reason there isn’t equality is the unwritten higher-order terms that vanish in a limit.
Alternate limits. Another way of writing this makes the remainder explicit as a term of smaller order:
\[
(f(x+h) - f(x)) - f'(x)h = \mathscr{o}(h),
\]
which means that if we divide both sides by \(h\) and take the limit as \(h \rightarrow 0\), we get \(0\) on the right, recovering the limit definition of the derivative on the left.
Differential notation simply writes this as \(dy = f'(x)dx\). Focusing on \(f\) and not \(y=f(x)\), we might write
\[
df = f(x+dx) - f(x) = f'(x) dx.
\]
In the above, \(df\) and \(dx\) are differentials, made rigorous by a limit, which hides the higher order terms.
We will see that all the derivatives encountered so far can be expressed in this last form.
60.1.1 Univariate, vector-valued
For example, when \(f: R \rightarrow R^m\) was a vector-valued function the derivative was defined similarly through a limit of \((f(t + \Delta t) - f(t))/{\Delta t}\), where each component needed to have a limit. This can be rewritten through \(f(t + dt) - f(t) = f'(t) dt\), again using differentials to avoid the higher order terms.
60.1.2 Multivariate, scalar-valued
When \(f: R^n \rightarrow R\) is a scalar-valued function with vector inputs, differentiability was defined by a gradient existing with \(f(c+h) - f(c) - \nabla{f}(c) \cdot h\) being \(\mathscr{o}(\|h\|)\). In other words \(df = f(c + dh) - f(c) = \nabla{f}(c) \cdot dh\). The gradient has the same shape as \(c\), a column vector. If we take the row vector (e.g. \(f'(c) = \nabla{f}(c)^T\)) then again we see \(df = f(c+dh) - f(c) = f'(c) dh\), where the last term uses matrix multiplication of a row vector times a column vector.
60.1.3 Multivariate, vector-valued
Finally, when \(f:R^n \rightarrow R^m\), the Jacobian was defined and characterized by \(\| f(x + dx) - f(x) - J_f(x)dx \|\) being \(\mathscr{o}(\|dx\|)\). Again, we can express this through \(df = f(x + dx) - f(x) = f'(x)dx\) where \(f'(x) = J_f(x)\).
60.1.4 Vector spaces
The generalization of the derivative involves linear operators which are defined for vector spaces.
A vector space is a set of mathematical objects which can be added together and also multiplied by a scalar. Vectors of the same size, as previously discussed, are the typical example, with vector addition and scalar multiplication already defined. Matrices of the same size (and some subclasses) also form a vector space.
Additionally, many other sets of objects form vector spaces. Certain families of functions provide examples: polynomial functions of degree \(n\) or less; continuous functions; or functions with a certain number of derivatives. The last two are infinite dimensional; our focus here is on finite-dimensional vector spaces.
Let’s take differentiable functions as an example. These form a vector space: if \(f\) and \(g\) are differentiable, then so is \(af(x)+bg(x)\), and its derivative follows the simplest derivative rule: \([af(x) + bg(x)]' = a[f(x)]' + b[g(x)]'\).
A finite-dimensional vector space is described by a basis—a minimal set of vectors whose linear combinations describe the whole space. For some typical vector spaces, this is the set of special vectors with \(1\) as one of the entries, and \(0\) otherwise.
A key fact about a basis for a finite-dimensional vector space is that every vector in the space can be expressed uniquely as a linear combination of the basis vectors. The set of numbers used in the linear combination, along with an order for the basis, means an element in a finite-dimensional vector space can be associated with a unique coordinate vector.
Vectors and matrices have properties that are generalizations of the real numbers. As vectors and matrices form vector spaces, the concept of addition of vectors and matrices is defined, as is scalar multiplication. Additionally, we have seen:
The dot product between two vectors of the same length is defined easily (\(v\cdot w = \sum_i v_i w_i\)). It is related to the length through \(\|v\|^2 = v\cdot v\).
Matrix multiplication is defined for two properly sized matrices. If \(A\) is \(m \times k\) and \(B\) is \(k \times n\) then \(AB\) is a \(m\times n\) matrix with \((i,j)\) term given by the dot product of the \(i\)th row of \(A\) (viewed as a vector) and the \(j\)th column of \(B\) (viewed as a vector). Matrix multiplication is associative but not commutative. (E.g. \((AB)C = A(BC)\) but \(AB\) and \(BA\) need not be equal, or even defined, as the shapes may not match up.)
A square matrix \(A\) has an inverse \(A^{-1}\) if \(AA^{-1} = A^{-1}A = I\), where \(I\) is the identity matrix (a matrix which is zero except on its diagonal entries, which are all \(1\)). Square matrices may or may not have an inverse. A matrix without an inverse is called singular.
A vector can be viewed as a matrix. The association chosen here is the common one: a column vector (an \(n \times 1\) matrix).
The transpose of a matrix comes from interchanging its rows and columns. The transpose of a column vector is a row vector, so \(v\cdot w = v^T w\), where we use a superscript \(T\) for the transpose. The transpose of a product is the product of the transposes—reversed: \((AB)^T = B^T A^T\); the transpose of a transpose is an identity operation: \((A^T)^T = A\); the inverse of a transpose is the transpose of the inverse: \((A^{-1})^T = (A^T)^{-1}\).
Matrices for which \(A = A^T\) are called symmetric.
The adjoint of a matrix is related to the transpose, except that complex conjugates are also taken. When a matrix has real components, the adjoint and transpose are identical operations.
The trace of a square matrix is just the sum of its diagonal terms.
The determinant of a square matrix is more involved to compute, but was previously seen to have a relationship to the volume of a certain parallelepiped.
These operations have different inputs and outputs: the determinant and trace take a (square) matrix and return a scalar; the inverse takes a square matrix and returns a square matrix (when defined); the transpose and adjoint take a rectangular matrix and return a rectangular matrix.
In addition to these, there are a few other key operations on matrices described in the following.
60.1.5 Linear operators
The Bright, Edelman, and Johnson (2025) notes cover differentiation of functions in this uniform manner, extending the form \(df = f(x+dx) - f(x) = f'(x)dx\) by treating derivatives more generally as linear operators. A linear operator \(L\) satisfies
\[
L(\alpha v + \beta w) = \alpha L(v) + \beta L(w),
\]
where the \(\alpha\) and \(\beta\) are scalars, and \(v\) and \(w\) come from a vector space.
Taking the real numbers as a vector space, then regular multiplication is a linear operation, as \(c \cdot (ax + by) = a\cdot(cx) + b\cdot(cy)\) using the distributive and commutative properties.
Taking \(n\)-dimensional vectors as a vector space, matrix multiplication by an \(n \times n\) matrix on the left is a linear operator, as \(M(av + bw) = a(Mv) + b(Mw)\), using distribution and the commutative properties of scalar multiplication.
We saw differentiable functions form a vector space; the derivative is a linear operator, as \([af(x) + bg(x)]' = af'(x) + bg'(x)\).
The use of []
The referenced notes identify \(f'(x) dx\) as \(f'(x)[dx]\), the latter emphasizing \(f'(x)\) acts on \(dx\) and the notation is not commutative (e.g., it is not \(dx f'(x)\)). The use of \([]\) is to indicate that \(f'(x)\) “acts” on \(dx\) in a linear manner. It may be multiplication, matrix multiplication, or something else. Parentheses are not used, as they might suggest function application or multiplication.
60.2 The derivative as a linear operator
We take the view that a derivative is a linear operator where \(df = f(x+dx) - f(x) = f'(x)[dx]\).
In writing \(df = f(x + dx) - f(x) = f'(x)[dx]\) generically, some underlying facts are left implicit: \(dx\) has the same shape as \(x\) (so can be added) and there is an underlying concept of distance and size that allows the above to be made rigorous. This may be an absolute value or a norm.
Example: directional derivatives
Suppose \(f: R^n \rightarrow R\), a scalar-valued function of a vector. Then the directional derivative at \(x\) in the direction \(v\) was defined for a scalar \(\alpha\) by:
\[
\lim_{\alpha \rightarrow 0} \frac{f(x + \alpha v) - f(x)}{\alpha}.
\]
In differential terms,
\[
df = f(x + d\alpha\, v) - f(x) = f'(x)[d\alpha\, v] = f'(x)[v]\, d\alpha,
\]
using linearity to move the scalar multiplication by \(d\alpha\) outside the action of the linear operator. This connects the partial derivative at \(x\) in the direction of \(v\) with \(f'(x)\):
\[
\lim_{\alpha \rightarrow 0} \frac{f(x + \alpha v) - f(x)}{\alpha} = f'(x)[v].
\]
Not only does this give a connection in notation with the derivative, it naturally illustrates how the derivative as a linear operator can act on non-infinitesimal values, in this case on \(v\).
Previously, we wrote \(\nabla f \cdot v\) for the directional derivative, where the gradient is a column vector.
The above uses the identification \(f' = (\nabla f)^T\). For \(f: R^n \rightarrow R\) we have \(df = f(x + dx) - f(x) = f'(x) [dx]\) is a scalar, so if \(dx\) is a column vector, \(f'(x)\) is a row vector with the same number of components (just as \(\nabla f\) is a column vector with the same number of components). The operation \(f'(x)[dx]\) is just matrix multiplication, which is a linear operation.
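As a small numeric illustration (a sketch using the ForwardDiff package with a made-up function, point, and direction, not code from the referenced notes), the directional derivative computed as \(\nabla f(x)^T v\) agrees with the single-variable derivative of \(t \mapsto f(x + tv)\):
using ForwardDiff
f(x) = x[1]^2 * x[2]                    # an example scalar-valued function on R²
x0, v = [1.0, 2.0], [3.0, 4.0]          # a point and a direction
lhs = ForwardDiff.gradient(f, x0)' * v              # ∇f(x)ᵀ v = f′(x)[v]
rhs = ForwardDiff.derivative(t -> f(x0 + t*v), 0.0) # d/dt f(x + t v) at t = 0
lhs ≈ rhs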
Example: derivative of a matrix expression
Bright, Edelman, and Johnson (2025) include this example to show that the computation of derivatives using components can be avoided. Consider \(f(x) = x^T A x\) where \(x\) is a vector in \(R^n\) and \(A\) is an \(n\times n\) matrix. This type of expression is common.
Then \(f: R^n \rightarrow R\) and its derivative can be computed:
\[
\begin{align*}
df &= f(x + dx) - f(x)\\
&= (x + dx)^T A (x + dx) - x^TAx \\
&= \textcolor{blue}{x^TAx} + dx^TA x + x^TAdx + \textcolor{red}{dx^T A dx} - \textcolor{blue}{x^TAx}\\
&= dx^TA x + x^TAdx \\
&= (dx^TAx)^T + x^TAdx \\
&= x^T A^T dx + x^T A dx\\
&= x^T(A^T + A) dx
\end{align*}
\]
The term \(dx^T A dx\) is dropped, as it is higher order (it goes to zero faster), containing two \(dx\) terms. In the second-to-last step, an identity operation (taking the transpose of the scalar quantity \(dx^TAx\)) is used to simplify the algebra. Finally, as \(df = f'(x)[dx]\), the identification \(f'(x) = x^T(A^T+A)\) is made, or, taking transposes, \(\nabla f(x) = (A + A^T)x\).
Compare the elegance above with the componentwise version which, even simplified, still requires specifying a size to carry out the computation:
using SymPy
@syms x[1:3]::real A[1:3, 1:3]::real
u = x' * A * x
grad_u = [diff(u, xi) for xi in x]
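As a check (a sketch assuming the symbolic setup just above), the componentwise gradient should agree with \((A + A^T)x\):
# every component of the difference simplifies to zero
all(d == 0 for d in simplify.(grad_u - (A + transpose(A)) * x))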
For \(f: R^n \rightarrow R^m\), Bright, Edelman, and Johnson (2025) give an example of computing the Jacobian without resorting to componentwise computations. Let \(f(x) = Ax\) with \(A\) an \(m \times n\) matrix. It follows that
\[
df = f(x + dx) - f(x) = A(x + dx) - Ax = A\, dx.
\]
The Jacobian is the linear operator \(A\) acting on \(dx\). (Seeing that \(Adx = f'(x)[dx]\) implies \(f'(x)=A\) comes as this action is true for any \(dx\), hence the actions must be the same.)
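A quick symbolic check (a sketch using SymPy’s jacobian method, as is also done further below; the names \(B\), \(y\) and the \(2\times 3\) size are illustrative):
@syms y[1:3]::real B[1:2, 1:3]::real
J = (B * y).jacobian(y)               # Jacobian of f(y) = B y
all(j == b for (j, b) in zip(J, B))   # it equals B entry by entry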
60.3 Differentiation rules
Various differentiation rules are still available such as the sum, product, and chain rules.
60.3.1 Sum and product rules for the derivative
Using the differential notation—which implicitly ignores higher order terms as they vanish in a limit—the sum and product rules can be derived.
For the sum rule, let \(f(x) = g(x) + h(x)\). Then
\[
\begin{align*}
df &= f(x + dx) - f(x)\\
&= \left(g(x+dx) + h(x+dx)\right) - \left(g(x) + h(x)\right)\\
&= \left(g(x+dx) - g(x)\right) + \left(h(x+dx) - h(x)\right)\\
&= g'(x)[dx] + h'(x)[dx]\\
&= \left(g'(x) + h'(x)\right)[dx]
\end{align*}
\]
Comparing, we get \(f'(x)[dx] = (g'(x) + h'(x))[dx]\) or \(f'(x) = g'(x) + h'(x)\). (The last two lines above show how the new linear operator \(g'(x) + h'(x)\) is defined on a value, by adding the application of each.)
The sum rule has the same derivation as was done with univariate, scalar functions. Similarly for the product rule.
The product rule for \(f(x) = g(x)h(x)\) comes as:
\[
\begin{align*}
df &= f(x + dx) - f(x)\\
&= g(x+dx)h(x+dx) - g(x)h(x)\\
&= \left(g(x) + g'(x)[dx]\right)\left(h(x) + h'(x)[dx]\right) - g(x)h(x)\\
&= g'(x)[dx]\, h(x) + g(x)\, h'(x)[dx],
\end{align*}
\]
where the higher-order term \(g'(x)[dx]\,h'(x)[dx]\) is dropped.
For example, with \(f(x) = x^T A x\), taking \(g(x) = x^T\) and \(h(x) = Ax\) gives:
\[
\begin{align*}
df &= (dx)^T (Ax) + x^T d(Ax)\\
&= x^T A^T dx + x^T A\, dx\\
&= x^T(A^T + A)\, dx.
\end{align*}
\]
In the second line the transpose of the scalar quantity \(dx^TAx\) is taken to simplify the expression, and the first calculation (\(d(Ax) = A\,dx\)) is used.
When \(A^T = A\) (\(A\) is symmetric) this simplifies to the more familiar looking \(2x^TA\), but we see that this requires assumptions not needed in the scalar case.
Example
Bright, Edelman, and Johnson (2025) consider what in Julia is .*. That is, the elementwise-multiplication operation
\[
x .* y = (x_1y_1, x_2y_2, \dots, x_ny_n),
\]
which can also be written as \(\text{diag}(x)\, y\), where \(\text{diag}(x)\) is the diagonal matrix with the entries of \(x\) on its diagonal.
They compute the derivative of \(f(x) = A(x .* x)\) for some fixed matrix \(A\) of the proper size.
We can see by the product rule that \(d(\text{diag}(v)w) = d(\text{diag}(v))\, w + \text{diag}(v)\, dw = (dv) .* w + v .* dw\). So, with \(v = w = x\),
\(df = A(dx .* x + x .* dx) = 2A(x .* dx)\), as \(.*\) is commutative by its definition. Writing this as \(df = 2A(x .* dx) = 2A(\text{diag}(x) dx) = (2A\text{diag}(x)) dx\), we identify \(f'(x) = 2A\text{diag}(x)\).
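A numeric check of this identification (a sketch using the ForwardDiff package and Diagonal from LinearAlgebra, with randomly chosen values):
using ForwardDiff, LinearAlgebra
M = rand(3, 3)                          # a stand-in for the fixed matrix A
x0 = rand(3)
fM(x) = M * (x .* x)                    # f(x) = A(x .* x)
ForwardDiff.jacobian(fM, x0) ≈ 2M * Diagonal(x0)   # f′(x) = 2A⋅diag(x)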
This operation is called the Hadamard product and it extends to matrices and arrays.
Numerator layout
The Wikipedia page on matrix calculus has numerous such “identities” for derivatives of different common matrix/vector expressions. As vectors are viewed here as column vectors, the “numerator layout” identities apply.
60.3.2 The chain rule
Like the product rule, the chain rule is shown by Bright, Edelman, and Johnson (2025) in this notation with \(f(x) = g(h(x))\):
\[
\begin{align*}
df &= f(x + dx) - f(x)\\
&= g(h(x + dx)) - g(h(x))\\
&= g\left(h(x) + h'(x)[dx]\right) - g(h(x))\\
&= g'(h(x))\left[h'(x)[dx]\right].
\end{align*}
\]
The operator \(f'(x)= g'(h(x))\, h'(x)\) is a composition of linear operators; for the functions considered here, it is a product of matrices (the Jacobians).
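As a numeric illustration (a sketch with made-up component functions, using the ForwardDiff package), the Jacobian of the composition is the product of the Jacobians:
using ForwardDiff
h(x) = [x[1] * x[2], sin(x[1])]             # h: R² → R²
g(y) = [y[1] + y[2]^2, y[1] * y[2], y[2]]   # g: R² → R³
f(x) = g(h(x))
x0 = [1.0, 2.0]
ForwardDiff.jacobian(f, x0) ≈ ForwardDiff.jacobian(g, h(x0)) * ForwardDiff.jacobian(h, x0)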
60.3.3 Computational differences with expressions from the chain rule
Of note here is the application of the chain rule to three (or more) compositions, where \(c:R^n \rightarrow R^j\), \(b:R^j \rightarrow R^k\), and \(a:R^k \rightarrow R^m\):
If \(f(x) = a(b(c(x)))\) then the derivative is:
\[
f'(x) = a'(b(c(x))) b'(c(x)) c'(x),
\]
which can be expressed as three matrix multiplications two ways:
\[
f' = (a'b')c' \text{ or } f' = a'(b'c')
\]
Multiplying left to right (the first) is called reverse mode; multiplying right to left (the second) is called forward mode. The distinction becomes important when considering the computational cost of the multiplications.
If \(f: R^n \rightarrow R^m\) has \(n\) much bigger than \(1\) and \(m=1\), then it is much faster to do left-to-right multiplication (many more inputs than outputs).
If \(f:R^n \rightarrow R^m\) has \(n=1\) and \(m\) much bigger than one, then it is faster to do right-to-left multiplication (many more outputs than inputs).
The reason comes down to the shape of the matrices. To see, we need to know that matrix multiplication of an \(m \times q\) matrix times a \(q \times n\) matrix takes an order of \(mqn\) operations.
When \(m=1\), the derivative is a product of matrices of size \(1\times k\), \(k \times j\), and \(j \times n\), yielding a matrix of size \(1 \times n\), matching the dimensions of \(f: R^n \rightarrow R^1\).
The operations involved in multiplication from left to right can be quantified. The first multiplication takes \(1 \cdot k \cdot j\) operations, leaving a \(1\times j\) matrix; the next multiplication then takes another \(1 \cdot j \cdot n\) operations, or \(kj + jn\) together.
Whereas computing from the right to left first takes \(k \cdot j \cdot n\) operations, leaving a \(k \times n\) matrix. The next multiplication would take another \(1 \cdot k \cdot n\) operations. In total:
left to right is \(kj + jn = j\cdot (k + n)\);
right to left is \(kjn + kn = kn \cdot (j + 1)\).
When \(j=k\), say, comparing the two shows left to right requires far fewer operations—the right-to-left count carries an extra factor of roughly \(\min(j, n)\). This can be quite significant in higher dimensions.
Example
Using the BenchmarkTools package, we can check the time to compute various products:
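Here is a sketch of such a comparison (the sizes and the use of the @btime macro are illustrative choices); the two groupings compute the same product, but with very different run times when \(n\) is large and \(m=1\):
using BenchmarkTools
n, j, k, m = 10_000, 100, 100, 1
ap = rand(m, k)    # a′ is m × k
bp = rand(k, j)    # b′ is k × j
cp = rand(j, n)    # c′ is j × n
@btime ($ap * $bp) * $cp;   # left to right (reverse mode)
@btime $ap * ($bp * $cp);   # right to left (forward mode)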
In calculus, we typically have \(n\) and \(m\) equal to \(1\), \(2\), or \(3\). But that need not be the case, especially if differentiation is over a parameter space.
60.4 Derivatives of matrix functions
What is the derivative of \(f(A) = A^2\)?
The function \(f\) takes an \(n\times n\) matrix and returns a matrix of the same size.
This derivative can be derived directly from the product rule:
\[
\begin{align*}
df &= d(A^2) = d(AA)\\
&= dA A + A dA
\end{align*}
\]
That is \(f'(A)\) is the operator \(f'(A)[\delta A] = A \delta A + \delta A A\). (This is not \(2A\delta A\), as \(A\) may not commute with \(\delta A\).)
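A quick numeric sanity check (a sketch with randomly chosen matrices and an arbitrary step size): the difference \(f(A + \delta A) - f(A) - (A\,\delta A + \delta A\, A)\) should be of size \(\|\delta A\|^2\), not \(\|\delta A\|\):
using LinearAlgebra
M  = rand(3, 3)                  # a stand-in for A
δM = 1e-6 * rand(3, 3)           # a small change δA
err = (M + δM)^2 - M^2 - (M*δM + δM*M)
norm(err)    # on the order of norm(δM)^2 ≈ 1e-12, confirming the linearization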
60.4.1 Vectorization of a matrix
Alternatively, we can identify \(A\) through its components, as a vector in \(R^{n^2}\) and then leverage the Jacobian.
One such identification is vectorization—consecutively stacking the column vectors into a single vector. In Julia the vec function does this operation:
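For instance, with a small made-up matrix:
M = [1 2; 3 4]
vec(M)    # returns [1, 3, 2, 4]: the columns of M stacked consecutively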
We do this via linear algebra first, then see a more elegant manner following the notes.
A course in linear algebra shows that any linear operator on a finite vector space can be represented as a matrix. The basic idea is to represent what the operator does to each basis element and put these values as columns of the matrix.
In this \(3 \times 3\) case, the linear operator works on an object with \(9\) slots and returns an object with \(9\) slots, so the matrix will be \(9 \times 9\).
The basis elements are simply the matrices with a \(1\) in spot \((i,j)\) and zero elsewhere. Here we generate them through a function:
basis(i, j, A) = (b = zeros(Int, size(A)...); b[i, j] = 1; b)
# columns of the Jacobian: the action of the operator on each basis element, vectorized
JJ = reduce(hcat, [vec(basis(i, j, A) * A + A * basis(i, j, A)) for j in 1:3 for i in 1:3])
But how can we see the Jacobian, \(J\), from the linear operator \(f'(A)[\delta A] = \delta A A + A \delta A\)?
To make this less magical, a related operation to vec is defined.
The \(\text{vec}\) function takes a matrix and stacks its columns.
The \(\text{vec}\) function can turn a matrix into a vector, so it can be used for finding the Jacobian, as above. However the shape of the matrix is lost, as are the fundamental matrix operations, like multiplication.
The Kronecker product replicates values, making a bigger matrix: if \(A\) and \(B\) are matrices, the Kronecker product \(A \otimes B\) replaces each entry of \(A\) with that entry times \(B\), so each entry of \(A\) becomes a block the size of \(B\).
The Kronecker product of an \(m\times n\) matrix \(A\) and a \(j \times k\) matrix \(B\) has size \(mj \times nk\).
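In Julia this is the kron function; a small illustration with made-up matrices:
using LinearAlgebra          # provides kron
M = [1 2; 3 4]
N = [0 1; 1 0]
kron(M, N)   # a 4×4 matrix: each entry M[i,j] becomes the 2×2 block M[i,j]*N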
The Kronecker product has a certain algebra, including:
transposes: \((A \otimes B)^T = A^T \otimes B^T\)
symmetry, orthogonality: \(A\otimes B\) is symmetric (\((A\otimes B)^T = A\otimes B\)) if both \(A\) and \(B\) are symmetric; similarly, \(A\otimes B\) is orthogonal if both \(A\) and \(B\) are orthogonal
trace (sum of diagonal): \(\text{tr}(A \otimes B) = \text{tr}(A)\text{tr}(B)\)
determinants: \(\det(A\otimes B) = \det(A)^m \det(B)^n\), where \(A\) is \(n\times n\), \(B\) is \(m \times m\).
The main equation coupling vec and kron is the fact that if \(A\), \(B\), and \(C\) have appropriate sizes, then:
\[
(A \otimes B) \text{vec}(C) = \text{vec}(B C A^T).
\]
Appropriate sizes for \(A\), \(B\), and \(C\) are determined by the various products in \(BCA^T\).
If \(A\) is \(m \times n\) and \(B\) is \(r \times s\), then since \(BC\) is defined, \(C\) has \(s\) rows, and since \(CA^T\) is defined, \(C\) must have \(n\) columns, as \(A^T\) is \(n \times m\); so \(C\) must be \(s\times n\). Checking this is correct on the other side: \(A \otimes B\) has size \(mr \times ns\) and \(\text{vec}(C)\) has length \(sn\), so that product works, size-wise.
The referenced notes have an explanation for this formula, but we only confirm it with an example using \(m=n=2\) and \(r=s=3\):
@syms A[1:2, 1:2]::real B[1:3, 1:3]::real C[1:3, 1:2]::real
L, R = kron(A, B) * vec(C), vec(B * C * A')
all(l == r for (l, r) ∈ zip(L, R))
true
Now to use this relationship to recognize \(df = A\, dA + dA\, A\) with the Jacobian computed from \(\text{vec}(f(A))\).
We have \(\text{vec}(A\, dA + dA\, A) = \text{vec}(A\, dA) + \text{vec}(dA\, A)\), by the obvious linearity of \(\text{vec}\). Now, inserting an identity matrix, \(I\), which is symmetric, in a useful spot we have:
\[
\text{vec}(A dA) = \text{vec}(A dA I^T) = (I \otimes A) \text{vec}(dA),
\]
and
\[
\text{vec}(dA A) = \text{vec}(I dA (A^T)^T) = (A^T \otimes I) \text{vec}(dA).
\]
This leaves
\[
\text{vec}(A dA + dA A) =
\left((I \otimes A) + (A^T \otimes I)\right) \text{vec}(dA)
\]
We should then get the Jacobian we computed from the following:
@syms A[1:3, 1:3]::real
using LinearAlgebra: I
J = vec(A^2).jacobian(vec(A))
JJ = kron(I(3), A) + kron(A', I(3))
all(j == jj for (j, jj) in zip(J, JJ))
true
This technique can also be used with other powers, say \(f(A) = A^3\), where the resulting \(df = A^2 dA + A dA A + dA A^2\) is one answer that can be compared to a Jacobian through
\[
\begin{align*}
df &= \text{vec}(A^2 dA I^T) + \text{vec}(A dA A) + \text{vec}(I dA A^2)\\
&= (I \otimes A^2)\text{vec}(dA) + (A^T \otimes A) \text{vec}(dA) + ((A^T)^2 \otimes I) \text{vec}(dA)
\end{align*}
\]
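As a check (a sketch assuming the same symbolic setup as just above; simplify is used, as the two sides may be arranged differently before simplification):
J3 = vec(A^3).jacobian(vec(A))
JJ3 = kron(I(3), A^2) + kron(A', A) + kron((A')^2, I(3))
all(simplify(j - jj) == 0 for (j, jj) in zip(J3, JJ3))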
The above shows how to relate the derivative of a matrix function to the Jacobian of a vectorized function, but only for illustration. It is certainly not necessary to express the derivative of \(f\) in terms of the derivative of its vectorized counterpart.
Example: derivative of the matrix inverse
What is the derivative of \(f(A) = A^{-1}\)? The same technique used to find the derivative of the inverse of a univariate, scalar-valued function is useful. Starting with \(I = AA^{-1}\) and noting \(dI\) is \(0\) we have, by the product rule,
\[
0 = dI = d(AA^{-1}) = dA\, A^{-1} + A\, d(A^{-1}).
\]
Solving gives \(d(A^{-1}) = f'(A)[dA] = -A^{-1}\, dA\, A^{-1}\), the matrix analog of \((1/x)' = -1/x^2\).
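A numeric sanity check (a sketch with an arbitrarily chosen, comfortably invertible matrix and step): the error in this linearization should be of order \(\|dA\|^2\):
using LinearAlgebra
A  = rand(3, 3) + 3I          # shift to keep A comfortably invertible
δA = 1e-6 * rand(3, 3)
err = inv(A + δA) - inv(A) - (-inv(A) * δA * inv(A))
norm(err)    # on the order of norm(δA)^2, consistent with d(A⁻¹) = -A⁻¹ dA A⁻¹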
Example: derivative of the determinant
Let \(f(A) = \text{det}(A)\). What is the derivative?
First, the determinant of a square, \(n\times n\), matrix \(A\) is a scalar summary of \(A\). There are different means to compute the determinant, but this recursive one (expansion by cofactors) is particularly helpful here:
\[
\det(A) = \sum_{i=1}^n A_{ij} C_{ij},
\]
for any \(j\). The cofactor \(C_{ij}\) is the determinant of the \((n-1)\times(n-1)\) matrix with the \(i\)th row and \(j\)th column deleted, times \((-1)^{i+j}\).
To find the gradient of \(f\), we differentiate by each of the \(A_{ij}\) variables. Since the cofactor \(C_{ij}\) involves no entries from the \(i\)th row or \(j\)th column, expansion along column \(j\) gives \(\partial f/\partial A_{ij} = C_{ij}\). Collecting these, the gradient of \(\det\) is the matrix of cofactors; equivalently, \(df = \sum_{i,j} C_{ij}\, dA_{ij} = \text{tr}(\text{adj}(A)\, dA)\), which for invertible \(A\) is \(\det(A)\, \text{tr}(A^{-1}\, dA)\).
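A numeric check of this formula (a sketch; ForwardDiff differentiates through the generic LU-based det, and the matrix is shifted to be safely invertible):
using ForwardDiff, LinearAlgebra
A = rand(3, 3) + 3I
G = ForwardDiff.gradient(det, A)     # matrix of partials ∂det(A)/∂A[i,j]
G ≈ det(A) * transpose(inv(A))       # the cofactor matrix, det(A)⋅(A⁻¹)ᵀ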
60.5 The adjoint method
The chain rule brings about a series of products. The adjoint method, illustrated by Bright, Edelman, and Johnson (2025) and summarized below, shows how to approach the computation of the series in a direction that minimizes the computational cost, illustrating why reverse mode is preferred to forward mode when a scalar-valued function of several variables is considered.
Bright, Edelman, and Johnson (2025) consider the derivative of
\[
g(p) = f(A(p)^{-1} b)
\]
This might arise from applying a scalar-valued \(f\) to the solution of \(Ax = b\), where \(A\) is parameterized by \(p\). The number of parameters might be quite large, so how the resulting computation is organized can affect the computational cost.
The chain rule, together with the derivative of the matrix inverse, gives the following computation to find the derivative (or gradient):
\[
dg = f'(x)\left[d(A(p)^{-1}b)\right] = f'(x)\left[-A(p)^{-1}\, dA\, A(p)^{-1} b\right] = -f'(x)\, A(p)^{-1}\, dA\, A(p)^{-1} b.
\]
By setting \(v^T = f'(x)A(p)^{-1}\) and writing \(x = A(p)^{-1} b\) this becomes
\[
dg = -v^T dA x.
\]
This product of three terms can be computed in two directions:
From left to right:
First \(v\) is found from \(v^T = f'(x) A^{-1}\) by solving \(v = (A^{-1})^T (f'(x))^T = (A^T)^{-1} \nabla f\), or, equivalently, by solving \(A^T v = \nabla f\). This is called the adjoint equation.
The partial derivatives of \(g\) in \(p\) are then related to the partial derivatives of \(A\) through:
\[
\frac{\partial g}{\partial p_k} = -v^T \frac{\partial A}{\partial p_k} x,
\]
as the scalar factor \(dp_k\) commutes through. With \(v\) and \(x\) solved for (via the adjoint equation and from solving \(Ax=b\)) the partials in \(p_k\) are computed with dot products. There are just two costly operations (the two solves).
From right to left:
The value of \(x\) can be solved for, as above, but computing the value of
\[
\frac{\partial x}{\partial p_k} = -A^{-1}\frac{\partial A}{\partial p_k} x
\]
requires a costly solve of \(A^{-1}\frac{\partial A}{\partial p_k} x\) for each \(p_k\), and \(p\) may have many components. This is the difference: left to right requires only the single solve of the adjoint equation.
As mentioned above, the reverse mode offers advantages when there are many input parameters (\(p\)) and a single output parameter.
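A small sketch of the two-solve computation (the parameterization \(A(p) = A_0 + p_1A_1 + p_2A_2\), the choice \(f(x) = \sum_i x_i^2\), and the use of ForwardDiff as an independent check are all illustrative assumptions, not from the referenced notes):
using LinearAlgebra, ForwardDiff
A0, A1, A2 = 5I + rand(3, 3), rand(3, 3), rand(3, 3)
b = rand(3)
Ap(p) = A0 + p[1]*A1 + p[2]*A2             # A(p), parameterized by p
f(x) = sum(x .^ 2)                         # a scalar-valued f
g(p) = f(Ap(p) \ b)

p = [0.1, 0.2]
x = Ap(p) \ b                              # first solve: A(p) x = b
v = Ap(p)' \ ForwardDiff.gradient(f, x)    # second solve, the adjoint equation: A(p)ᵀ v = ∇f
grad_adjoint = [-v' * A1 * x, -v' * A2 * x]   # ∂g/∂p_k = -vᵀ (∂A/∂p_k) x, just dot products
grad_adjoint ≈ ForwardDiff.gradient(g, p)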
Example
Suppose \(x(p)\) solves some system of equations \(h(x(p),p) = 0\) in \(R^n\) (\(n\) possibly just \(1\)) and \(g(p) = f(x(p))\) is some non-linear transformation of \(x\). What is the derivative of \(g\) in \(p\)?
Suppose the implicit function theorem applies to \(h(x,p) = 0\), that is, locally the response \(x(p)\) has a derivative. Differentiating \(h(x(p), p) = 0\) gives
\[
0 = dh = \frac{\partial h}{\partial x}\, dx + \frac{\partial h}{\partial p}\, dp,
\quad\text{so}\quad
dx = -\left(\frac{\partial h}{\partial x}\right)^{-1} \frac{\partial h}{\partial p}\, dp,
\]
and moreover by the chain rule
\[
dg = f'(x)[dx] = -f'(x)\left(\frac{\partial h}{\partial x}\right)^{-1} \frac{\partial h}{\partial p}\, dp = -v^T \frac{\partial h}{\partial p}\, dp.
\]
Here \(v\) can be solved for by taking adjoints (as before). Let \(A = \partial h/\partial x\); then \(v^T = f'(x) A^{-1}\) or \(v = (A^{-1})^T (f'(x))^T = (A^T)^{-1} \nabla f\). That is, \(v\) solves \(A^Tv=\nabla f\). As before, it takes just two solves to get both \(g\) and its gradient.
60.6 Second derivatives, Hessian
We reference a theorem presented by Carlsson et al. (2025) for exposition, with some modification.
Theorem 1. Let \(f:X \rightarrow Y\), where \(X,Y\) are finite-dimensional inner product spaces with elements in \(R\). Suppose \(f\) is smooth (has a certain number of derivatives). Then for each \(x\) in \(X\) there exists a unique linear operator, \(f'(x)\), and a unique bilinear, symmetric operator \(f'': X \oplus X \rightarrow Y\) such that
\[
f(x + v) = f(x) + f'(x)[v] + \frac{1}{2} f''(x)[v, v] + \mathscr{o}(\|v\|^2).
\]
New terms include bilinear, symmetric, and inner product. An operator (\(X\oplus X \rightarrow Y\)) is bilinear if it is a linear operator in each of its two arguments. Such an operator is symmetric if interchanging its two arguments makes no difference in its output. Finally, an inner product space is one with a generalization of the dot product. An inner product takes two vectors \(x\) and \(y\) and returns a scalar; it is denoted \(\langle x,y\rangle\); and has properties of symmetry, linearity, and non-negativity (\(\langle x,x\rangle \geq 0\), and equal \(0\) only if \(x\) is the zero vector.) Inner products can be used to form a norm (or length) for a vector through \(||x||^2 = \langle x,x\rangle\).
We reference this, as the values denoted \(f'\) and \(f''\) are unique. So if we identify them one way, we have identified them.
Specializing to \(X=R^n\) and \(Y=R^1\), we have \(f'=(\nabla f)^T\) and \(f''\) is the Hessian.
Take \(n=2\). Previously we wrote a formula for Taylor’s theorem for \(f:R^n \rightarrow R\) which, with \(n=2\) and \(x=\langle x_1,x_2\rangle\), reads:
\[
f(x + dx) - f(x) \approx
\frac{\partial f}{\partial x_1}dx_1 + \frac{\partial f}{\partial x_2}dx_2
+ \frac{1}{2}
\begin{bmatrix} dx_1 & dx_2 \end{bmatrix}
H
\begin{bmatrix} dx_1 \\ dx_2 \end{bmatrix},
\]
\(H\) being the Hessian with entries \(H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}\).
This formula—\(f(x+dx)-f(x) \approx f'(x)dx + \frac{1}{2}dx^T H dx\)—is valid for any \(n\), showing \(n=2\) was just for ease of notation when expressing in the coordinates and not as matrices.
By uniqueness, we have under these assumptions that the Hessian is symmetric and the expression \(dx^T H dx\) is a bilinear form, which we can identify as \(f''(x)[dx,dx]\).
That the Hessian is symmetric could also be derived under these assumptions by directly computing that the mixed partials can have their order exchanged. But in this framework, as explained by Bright, Edelman, and Johnson (2025) (and shown later) it is a result of the underlying vector space having an addition that is commutative (e.g. \(u+v = v+u\)).
The mapping \((u,v) \rightarrow u^T A v\) for a matrix \(A\) is bilinear. For a fixed \(u\), it is linear as it can be viewed as \((u^TA)[v]\) and matrix multiplication is linear. Similarly for a fixed \(v\).
Bright, Edelman, and Johnson (2025) extend this characterization to a broader setting.
We have for some function \(f\)
\[
df = f(x + dx) - f(x) = f'(x)[dx]
\]
Then if \(d\tilde{x}\) is another differential change with the same shape as \(x\), we can look at the differential of \(f'(x)\):
\[
d(f'(x)) = f'(x + d\tilde{x}) - f'(x) = f''(x)[d\tilde{x}],
\]
the last equality being a definition. Here \(f''(x)[d\tilde{x}]\) is itself a linear operator that can be applied to \(dx\), written \(f''(x)[d\tilde{x}, dx]\). As \(f''\) is linear in the application to \(d\tilde{x}\) and also linear in the application to \(dx\), \(f''(x)\) is a bilinear operator.
Moreover it is symmetric: \(f''(x)[d\tilde{x},dx] = f''(x)[dx, d\tilde{x}]\). The key is the commutativity of vector addition—\(x + dx + d\tilde{x} = x + d\tilde{x} + dx\)—so differencing \(f\) first in \(dx\) and then in \(d\tilde{x}\), or in the other order, produces the same value.
Example: Hessian is symmetric
As mentioned earlier, the Hessian is the matrix arising from finding the second derivative of a multivariate, scalar-valued function \(f:R^n \rightarrow R\). As a bilinear form on a finite-dimensional vector space, it can be written as \(\tilde{x}^T A x\). As this second derivative is symmetric, \(\tilde{x}^T A x = x^T A \tilde{x}\), and since this value is a scalar, taking a transpose gives \(x^T A \tilde{x} = \tilde{x}^T A^T x\). It follows that \(\tilde{x}^T A x = \tilde{x}^T A^T x\) for all \(x\) and \(\tilde{x}\); that is, \(H = A\) must also be symmetric from general principles.
Example: second derivative of \(x^TAx\)
Consider an expression from earlier \(f(x) = x^T A x\) for some constant \(A\).
We have seen that \(f' = (\nabla f)^T = x^T(A+A^T)\). That is \(\nabla f = (A^T+A)x\) is linear in \(x\). The Jacobian of \(\nabla f\) is the Hessian, \(H = f'' = A + A^T\).
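A quick check (a sketch with a randomly chosen matrix, using ForwardDiff.hessian):
using ForwardDiff
M = rand(3, 3)
q(x) = x' * M * x                     # the quadratic form xᵀMx
x0 = rand(3)
ForwardDiff.hessian(q, x0) ≈ M + M'   # the Hessian is M + Mᵀ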
Example: second derivative of \(\text{det}(A)\)
Consider \(f(A) = \text{det}(A)\). We saw previously that \(df = \text{tr}(\text{adj}(A)\, dA)\), which for invertible \(A\) is \(\det(A)\,\text{tr}(A^{-1}\, dA)\).
Bright, Paige, Alan Edelman, and Steven G. Johnson. 2025. “Matrix Calculus (for Machine Learning and Beyond).” https://arxiv.org/abs/2501.14787.
Carlsson, Marcus, Viktor Nikitin, Erik Troedsson, and Herwig Wendt. 2025. “The Bilinear Hessian for Large Scale Optimization.” https://arxiv.org/abs/2502.03070.