60  Matrix calculus

This section illustrates a more general setting for taking derivatives, one that unifies the different approaches taken previously.

Based on Bright, Edelman, and Johnson’s notes

This section has essentially no original contribution, as it basically samples material from the notes Matrix Calculus (for Machine Learning and Beyond) by Paige Bright, Alan Edelman, and Steven G. Johnson. Their notes cover material taught in a course at MIT. Support materials for their course in Julia are available at https://github.com/mitmath/matrixcalc/tree/main. For more details and examples, please refer to the source.

60.1 Review

We have seen several “derivatives” of a function, based on the number of inputs and outputs. The first one was for functions \(f: R \rightarrow R\).

In this case, we saw that \(f\) has a derivative at \(c\) if this limit exists:

\[ \lim_{h \rightarrow 0}\frac{f(c + h) - f(c)}{h}. \]

The derivative as a function of \(x\) uses this rule for any \(x\) in the domain.

Common notation is:

\[ f'(x) = \frac{dy}{dx} = \lim_{h \rightarrow 0}\frac{f(x + h) - f(x)}{h} \]

(when the limit exists).

This limit gets re-expressed in different ways:

  • linearization writes \(f(x+\Delta x) - f(x) \approx f'(x)\Delta x\), where \(\Delta x\) is a small displacement from \(x\). The reason there isn’t equality is the unwritten higher order terms that vanish in a limit.

  • Alternate limits. Another way of writing this is in terms of explicit smaller order terms:

\[ (f(x+h) - f(x)) - f'(x)h = \mathscr{o}(h), \]

which means if we divide both sides by \(h\) and take the limit, we will get \(0\) on the right and the relationship on the left.

  • Differential notation simply writes this as \(dy = f'(x)dx\). Focusing on \(f\) and not \(y=f(x)\), we might write

\[ df = f(x+dx) - f(x) = f'(x) dx. \]

In the above, \(df\) and \(dx\) are differentials, made rigorous by a limit, which hides the higher order terms.

We will see all the derivatives encountered so far can be similarly expressed as this last characterization.

60.1.1 Univariate, vector-valued

For example, when \(f: R \rightarrow R^m\) was a vector-valued function the derivative was defined similarly through a limit of \((f(t + \Delta t) - f(t))/{\Delta t}\), where each component needed to have a limit. This can be rewritten through \(f(t + dt) - f(t) = f'(t) dt\), again using differentials to avoid the higher order terms.

60.1.2 Multivariate, scalar-valued

When \(f: R^n \rightarrow R\) is a scalar-valued function with vector inputs, differentiability was defined by a gradient existing with \(f(c+h) - f(c) - \nabla{f}(c) \cdot h\) being \(\mathscr{o}(\|h\|)\). In other words \(df = f(c + dh) - f(c) = \nabla{f}(c) \cdot dh\). The gradient has the same shape as \(c\), a column vector. If we take the row vector (e.g. \(f'(c) = \nabla{f}(c)^T\)) then again we see \(df = f(c+dh) - f(c) = f'(c) dh\), where the last term uses matrix multiplication of a row vector times a column vector.

60.1.3 Multivariate, vector-valued

Finally, when \(f:R^n \rightarrow R^m\), the Jacobian was defined and characterized by \(\| f(x + dx) - f(x) - J_f(x)dx \|\) being \(\mathscr{o}(\|dx\|)\). Again, we can express this through \(df = f(x + dx) - f(x) = f'(x)dx\) where \(f'(x) = J_f(x)\).
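As a quick numerical illustration of this last characterization, here is a minimal sketch (the function, point, and displacement are made up, and the ForwardDiff package is assumed for computing the Jacobian) checking that the remainder is small relative to \(\|dx\|\):

using ForwardDiff, LinearAlgebra
f(x) = [x[1]^2 + x[2], sin(x[1]*x[2]), x[2]^3]   # a made-up f: R^2 -> R^3
x, dx = [1.0, 2.0], [1e-6, -2e-6]
J = ForwardDiff.jacobian(f, x)
norm(f(x + dx) - f(x) - J*dx) / norm(dx)         # much smaller than 1, consistent with o(norm(dx))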

60.1.4 Vector spaces

The generalization of the derivative involves linear operators which are defined for vector spaces.

A vector space is a set of mathematical objects which can be added together and also multiplied by a scalar. Vectors of the same size, as previously discussed, are the typical example, with vector addition and scalar multiplication already defined. Matrices of the same size (and some subclasses) also form a vector space.

Additionally, many other sets of objects form vector spaces. Certain families of functions form examples, such as: polynomial functions of degree \(n\) or less; continuous functions; or functions with a certain number of derivatives. The last two are infinite dimensional; our focus here is on finite-dimensional vector spaces.

Let’s take differentiable functions as an example. These form a vector space as the derivative of a linear combination of differentiable functions is defined through the simplest derivative rule: \([af(x) + bg(x)]' = a[f(x)]' + b[g(x)]'\). If \(f\) and \(g\) are differentiable, then so is \(af(x)+bg(x)\).

A finite-dimensional vector space is described by a basis, a minimal set of vectors needed to describe the space through linear combinations. For some typical vector spaces, this is the set of special vectors with \(1\) as one of the entries and \(0\) otherwise.

A key fact about a basis for a finite-dimensional vector space is that every vector in the space can be expressed uniquely as a linear combination of the basis vectors. The set of numbers used in the linear combination, along with an order for the basis, means an element of a finite-dimensional vector space can be associated with a unique coordinate vector.

Vectors and matrices have properties that are generalizations of the real numbers. As vectors and matrices form vector spaces, the concept of addition of vectors and matrices is defined, as is scalar multiplication. Additionally, we have seen:

  • The dot product between two vectors of the same length is defined easily (\(v\cdot w = \Sigma_i v_i w_i\)). It is coupled with the length as \(\|v\|^2 = v\cdot v\).

  • Matrix multiplication is defined for two properly sized matrices. If \(A\) is \(m \times k\) and \(B\) is \(k \times n\) then \(AB\) is a \(m\times n\) matrix with \((i,j)\) term given by the dot product of the \(i\)th row of \(A\) (viewed as a vector) and the \(j\)th column of \(B\) (viewed as a vector). Matrix multiplication is associative but not commutative. (E.g. \((AB)C = A(BC)\) but \(AB\) and \(BA\) need not be equal, or even defined, as the shapes may not match up.)

  • A square matrix \(A\) has an inverse \(A^{-1}\) if \(AA^{-1} = A^{-1}A = I\), where \(I\) is the identity matrix (a matrix which is zero except on its diagonal entries, which are all \(1\)). Square matrices may or may not have an inverse. A matrix without an inverse is called singular.

  • Viewing a vector as a matrix is possible. The association chosen here is common and is through a column vector.

  • The transpose of a matrix comes from interchanging its rows and columns. The transpose of a column vector is a row vector, so \(v\cdot w = v^T w\), where we use a superscript \(T\) for the transpose. The transpose of a product is the product of the transposes, reversed: \((AB)^T = B^T A^T\); the transpose of a transpose is an identity operation: \((A^T)^T = A\); the inverse of a transpose is the transpose of the inverse: \((A^{-1})^T = (A^T)^{-1}\).

  • Matrices for which \(A = A^T\) are called symmetric.

  • The adjoint of a matrix is related to the transpose, only complex conjugates are also taken. When a matrix has real components, the adjoint and transpose are identical operations.

  • The trace of a square matrix is just the sum of its diagonal terms.

  • The determinant of a square matrix is involved to compute, but was previously seen to have a relationship to the volume of a certain parallelepiped.

These operations have different inputs and outputs: the determinant and trace take a (square) matrix and return a scalar; the inverse takes a square matrix and returns a square matrix (when defined); the transpose and adjoint take a rectangular matrix and return a rectangular matrix.

In addition to these, there are a few other key operations on matrices described in the following.

60.1.5 Linear operators

The Bright, Edelman, and Johnson (2025) notes cover differentiation of functions in this uniform manner, extending the above forms by treating derivatives more generally as linear operators.

A linear operator is a mathematical object which satisfies

\[ f[\alpha v + \beta w] = \alpha f[v] + \beta f[w]. \]

where the \(\alpha\) and \(\beta\) are scalars, and \(v\) and \(w\) come from a vector space.

Taking the real numbers as a vector space, then regular multiplication is a linear operation, as \(c \cdot (ax + by) = a\cdot(cx) + b\cdot(cy)\) using the distributive and commutative properties.

Taking \(n\)-dimensional vectors as a vector space, matrix multiplication by an \(n \times n\) matrix on the left is a linear operator, as \(M(av + bw) = a(Mv) + b(Mw)\), using distribution and the commutative properties of scalar multiplication.

We saw that differentiable functions form a vector space; on this space the derivative is a linear operator, as \([af(x) + bg(x)]' = af'(x) + bg'(x)\).

The use of []

The referenced notes write \(f'(x) dx\) as \(f'(x)[dx]\), the latter emphasizing that \(f'(x)\) acts on \(dx\) and that the notation is not commutative (e.g., it is not \(dx f'(x)\)). The use of \([]\) indicates that \(f'(x)\) “acts” on \(dx\) in a linear manner; the action may be multiplication, matrix multiplication, or something else. Parentheses are not used, as they might suggest function application or multiplication.

60.2 The derivative as a linear operator

We take the view that a derivative is a linear operator where \(df = f(x+dx) - f(x) = f'(x)[dx]\).

In writing \(df = f(x + dx) - f(x) = f'(x)[dx]\) generically, some underlying facts are left implicit: \(dx\) has the same shape as \(x\) (so can be added) and there is an underlying concept of distance and size that allows the above to be made rigorous. This may be an absolute value or a norm.

Example: directional derivatives

Suppose \(f: R^n \rightarrow R\), a scalar-valued function of a vector. Then the directional derivative at \(x\) in the direction \(v\) was defined for a scalar \(\alpha\) by:

\[ \frac{\partial}{\partial \alpha}f(x + \alpha v) \mid_{\alpha = 0} = \lim_{\Delta\alpha \rightarrow 0} \frac{f(x + \Delta\alpha v) - f(x)}{\Delta\alpha}. \]

This rate of change in the direction of \(v\) can be expressed through the linear operator \(f'(x)\) via

\[ df = f(x + d\alpha v) - f(x) = f'(x) [d\alpha v] = d\alpha f'(x)[v], \]

using linearity to move the scalar multiplication by \(d\alpha\) outside the action of the linear operator. This connects the partial derivative at \(x\) in the direction of \(v\) with \(f'(x)\):

\[ \frac{\partial}{\partial \alpha}f(x + \alpha v) \mid_{\alpha = 0} = f'(x)[v]. \]

Not only does this give a connection in notation with the derivative, it naturally illustrates how the derivative as a linear operator can act on non-infinitesimal values, in this case on \(v\).

Previously, we wrote \(\nabla f \cdot v\) for the directional derivative, where the gradient is a column vector.

The above uses the identification \(f' = (\nabla f)^T\). For \(f: R^n \rightarrow R\) we have \(df = f(x + dx) - f(x) = f'(x) [dx]\) is a scalar, so if \(dx\) is a column vector, \(f'(x)\) is a row vector with the same number of components (just as \(\nabla f\) is a column vector with the same number of components). The operation \(f'(x)[dx]\) is just matrix multiplication, which is a linear operation.
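A quick numerical check of this identity, a sketch assuming the ForwardDiff package with a made-up \(f\), \(x\), and \(v\):

using ForwardDiff
f(x) = exp(x[1]) * sin(x[2])                     # a made-up scalar-valued f
x, v = [0.5, 1.0], [1.0, 2.0]
ForwardDiff.derivative(α -> f(x + α*v), 0.0) ≈ ForwardDiff.gradient(f, x)' * v   # f'(x)[v] = ∇f(x) ⋅ v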

Example: derivative of a matrix expression

Bright, Edelman, and Johnson (2025) include this example to show that the computation of derivatives using components can be avoided. Consider \(f(x) = x^T A x\) where \(x\) is a vector in \(R^n\) and \(A\) is an \(n\times n\) matrix. This type of expression is common.

Then \(f: R^n \rightarrow R\) and its derivative can be computed:

\[ \begin{align*} df &= f(x + dx) - f(x)\\ &= (x + dx)^T A (x + dx) - x^TAx \\ &= \textcolor{blue}{x^TAx} + dx^TA x + x^TAdx + \textcolor{red}{dx^T A dx} - \textcolor{blue}{x^TAx}\\ &= dx^TA x + x^TAdx \\ &= (dx^TAx)^T + x^TAdx \\ &= x^T A^T dx + x^T A dx\\ &= x^T(A^T + A) dx \end{align*} \]

The term \(dx^T A dx\) is dropped, as it is higher order (it goes to zero faster), containing two \(dx\) terms. In the second-to-last step, an identity operation (taking the transpose of the scalar quantity) is used to simplify the algebra. Finally, as \(df = f'(x)[dx]\), the identification \(f'(x) = x^T(A^T+A)\) is made, or, taking transposes, \(\nabla f(x) = (A + A^T)x\).

Compare the elegance above with the component version which, even though simplified, still requires a specification of the size to carry out:

using SymPy
@syms x[1:3]::real A[1:3, 1:3]::real
u = x' * A * x
grad_u = [diff(u, xi) for xi in x]

\(\left[\begin{smallmatrix}2 A₁_{₁} x₁ + A₁_{₂} x₂ + A₁_{₃} x₃ + A₂_{₁} x₂ + A₃_{₁} x₃\\A₁_{₂} x₁ + A₂_{₁} x₁ + 2 A₂_{₂} x₂ + A₂_{₃} x₃ + A₃_{₂} x₃\\A₁_{₃} x₁ + A₂_{₃} x₂ + A₃_{₁} x₁ + A₃_{₂} x₂ + 2 A₃_{₃} x₃\end{smallmatrix}\right]\)

Compare to the formula for the gradient just derived:

grad_u_1 = (A + A')*x

\(\left[\begin{smallmatrix}2 A₁_{₁} x₁ + x₂ \left(A₁_{₂} + A₂_{₁}\right) + x₃ \left(A₁_{₃} + A₃_{₁}\right)\\2 A₂_{₂} x₂ + x₁ \left(A₁_{₂} + A₂_{₁}\right) + x₃ \left(A₂_{₃} + A₃_{₂}\right)\\2 A₃_{₃} x₃ + x₁ \left(A₁_{₃} + A₃_{₁}\right) + x₂ \left(A₂_{₃} + A₃_{₂}\right)\end{smallmatrix}\right]\)

The two are, of course, equal:

all(a == b for (a,b) in zip(grad_u, grad_u_1))
true
Example: derivative of matrix application

For \(f: R^n \rightarrow R^m\), Bright, Edelman, and Johnson (2025) give an example of computing the Jacobian without resorting to component-wise computations. Let \(f(x) = Ax\) with \(A\) an \(m \times n\) matrix. It follows that

\[ \begin{align*} df &= f(x + dx) - f(x)\\ &= A(x + dx) - Ax\\ &= Adx\\ &= f'(x)[dx]. \end{align*} \]

The Jacobian is the linear operator \(A\) acting on \(dx\). (Seeing that \(Adx = f'(x)[dx]\) implies \(f'(x)=A\) comes as this action is true for any \(dx\), hence the actions must be the same.)
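A numerical check that the Jacobian of \(x \mapsto Ax\) is \(A\) itself, a sketch assuming the ForwardDiff package with random data:

using ForwardDiff
A = rand(3, 4)                          # a random m × n matrix
x = rand(4)
ForwardDiff.jacobian(x -> A*x, x) ≈ A   # the Jacobian of x ↦ Ax is A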

60.3 Differentiation rules

Various differentiation rules are still available such as the sum, product, and chain rules.

60.3.1 Sum and product rules for the derivative

Using the differential notation—which implicitly ignores higher order terms as they vanish in a limit—the sum and product rules can be derived.

For the sum rule, let \(f(x) = g(x) + h(x)\). Then

\[ \begin{align*} df &= f(x + dx) - f(x) \\ &= f'(x)[dx]\\ &= \left(g(x+dx) + h(x+dx)\right) - \left(g(x) + h(x)\right)\\ &= \left(g(x + dx) - g(x)\right) + \left(h(x + dx) - h(x)\right)\\ &= g'(x)[dx] + h'(x)[dx]\\ &= \left(g'(x) + h'(x)\right)[dx] \end{align*} \]

Comparing, we get \(f'(x)[dx] = (g'(x) + h'(x))[dx]\) or \(f'(x) = g'(x) + h'(x)\). (The last two lines above show how the new linear operator \(g'(x) + h'(x)\) is defined on a value, by adding the application of each.)

The sum rule has the same derivation as was done with univariate, scalar functions. Similarly for the product rule.

The product rule for \(f(x) = g(x)h(x)\) comes as:

\[ \begin{align*} df &= f(x + dx) - f(x) \\ &= g(x+dx)h(x + dx) - g(x) h(x)\\ &= \left(g(x) + g'(x)[dx]\right)\left(h(x) + h'(x) [dx]\right) - g(x) h(x) \\ &= \textcolor{blue}{g(x)h(x)} + g'(x) [dx] h(x) + g(x) h'(x) [dx] + \textcolor{red}{g'(x)[dx] h'(x) [dx]} - \textcolor{blue}{g(x) h(x)}\\ &= \left(g'(x)[dx]\right)h(x) + g(x)\left(h'(x) [dx]\right)\\ &= dg h + g dh \end{align*} \]

after dropping the higher order term and cancelling \(gh\) terms of opposite signs in the fourth row.

Example

These two rules can be used to re-derive the results of the last two examples directly.

First, if \(f(x) = Ax\) and \(A\) is a constant, then:

\[ df = (dA)x + A(dx) = 0x + A dx = A dx. \]

Next, to differentiate \(f(x) = x^TAx\):

\[ \begin{align*} df &= dx^T (Ax) + x^T d(Ax) \\ &= (dx^T (Ax))^T + x^T A dx \\ &= x^T A^T dx + x^T A dx \\ &= x^T(A^T + A) dx \end{align*} \]

In the second line the transpose of the scalar quantity \(dx^TAx\) is taken to simplify the expression, and the first calculation (\(d(Ax) = A\,dx\)) is used.

When \(A^T = A\) (\(A\) is symmetric) this simplifies to a more familiar looking \(2x^TA\), but we see that this requires assumptions not needed in the scalar case.
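A quick numerical check of the derived gradient formula \(\nabla f(x) = (A + A^T)x\), a sketch assuming the ForwardDiff package with random data:

using ForwardDiff
A, x = rand(4, 4), rand(4)
ForwardDiff.gradient(x -> x' * A * x, x) ≈ (A + A') * x   # matches the derived formula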

Example

Bright, Edelman, and Johnson (2025) consider the elementwise product, written .* in Julia. That is, the operation:

\[ v .* w = \begin{bmatrix} v_1w_1 \\ v_2w_2 \\ \vdots\\ v_nw_n \end{bmatrix} = \begin{bmatrix} v_1 & 0 & \cdots & 0 \\ 0 & v_2 & \cdots & 0 \\ & & \vdots & \\ 0 & 0 & \cdots & v_n \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ \vdots\\ w_n \end{bmatrix} = \text{diag}(v) w. \]

They compute the derivative of \(f(x) = A(x .* x)\) for some fixed matrix \(A\) of the proper size.

We can see by the product rule that \(d(v .* w) = d(\text{diag}(v)) w + \text{diag}(v) dw = (dv) .* w + v .* dw\). So

\(df = A(dx .* x + x .* dx) = 2A(x .* dx)\), as \(.*\) is commutative by its definition. Writing this as \(df = 2A(x .* dx) = 2A(\text{diag}(x) dx) = (2A\text{diag}(x)) dx\), we identify \(f'(x) = 2A\text{diag}(x)\).

This operation is called the Hadamard product and it extends to matrices and arrays.
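The identification \(f'(x) = 2A\,\text{diag}(x)\) can also be checked numerically; a sketch assuming the ForwardDiff package, with random data:

using ForwardDiff, LinearAlgebra
A, x = rand(3, 4), rand(4)
f(x) = A * (x .* x)
ForwardDiff.jacobian(f, x) ≈ 2A * Diagonal(x)   # f'(x) = 2A ⋅ diag(x)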

Numerator layout

The Wikipedia page on matrix calculus has numerous such “identities” for derivatives of different common matrix/vector expressions. As vectors are viewed here as column vectors, the “numerator layout” identities apply.

60.3.2 The chain rule

Like the product rule, the chain rule is shown by Bright, Edelman, and Johnson (2025) in this notation with \(f(x) = g(h(x))\):

\[ \begin{align*} df &= f(x + dx) - f(x)\\ &= g(h(x + dx)) - g(h(x))\\ &= g(h(x) + h'(x)[dx]) - g(h(x))\\ &= g(h(x)) + g'(h(x))[h'(x)[dx]] - g(h(x))\\ &= g'(h(x)) [h'(x) [dx]]\\ &= (g'(h(x)) h'(x)) [dx] \end{align*} \]

The operator \(f'(x)= g'(h(x)) h'(x)\) is the composition of the two linear operators; when they are represented by matrices (Jacobians), it is a product of matrices.
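A numerical check that the Jacobian of a composition is the product of the Jacobians, a sketch assuming the ForwardDiff package with made-up \(g\) and \(h\):

using ForwardDiff
h(x) = [x[1]*x[2], x[1] + x[2]^2, sin(x[1])]        # h: R^2 -> R^3
g(y) = [y[1]^2 + y[3], y[2]*y[3]]                   # g: R^3 -> R^2
x = rand(2)
ForwardDiff.jacobian(x -> g(h(x)), x) ≈ ForwardDiff.jacobian(g, h(x)) * ForwardDiff.jacobian(h, x)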

60.3.3 Computational differences with expressions from the chain rule

Of note here is the application of the chain rule to three (or more) compositions where \(c:R^n \rightarrow R^j\), \(b:R^j \rightarrow R^k\), and \(a:R^k \rightarrow R^m\):

If \(f(x) = a(b(c(x)))\) then the derivative is:

\[ f'(x) = a'(b(c(x))) b'(c(x)) c'(x), \]

which can be expressed as three matrix multiplications two ways:

\[ f' = (a'b')c' \text{ or } f' = a'(b'c') \]

Multiplying left to right (the first) is called reverse mode; multiplying right to left (the second) is called forward mode. The distinction becomes important when considering the computational cost of the multiplications.

  • If \(f: R^n \rightarrow R^m\) has \(n\) much bigger than \(1\) and \(m=1\), then it is much faster to do left-to-right multiplication (many more inputs than outputs).
  • If \(f:R^n \rightarrow R^m\) has \(n=1\) and \(m\) much bigger than one, then it is faster to do right-to-left multiplication (many more outputs than inputs).

The reason comes down to the shapes of the matrices. To see this, we need to know that multiplying an \(m \times q\) matrix by a \(q \times n\) matrix takes on the order of \(mqn\) operations.

When \(m=1\) the derivative \(f'\) is a \(1 \times n\) row vector; its transpose, the gradient, is the product \(c'^T b'^T a'^T\) of matrices of size \(n\times j\), \(j\times k\), and \(k \times 1\), yielding an \(n \times 1\) vector. (Multiplying this transposed product right to left corresponds to multiplying \(a'b'c'\) left to right, that is, to reverse mode; left to right here corresponds to forward mode.)

The operations involved in multiplying from left to right can be quantified. The first multiplication takes \(njk\) operations and leaves an \(n\times k\) matrix; the next multiplication then takes another \(nk \cdot 1\) operations, or \(njk + nk\) altogether.

Computing from right to left, the first multiplication takes \(jk \cdot 1\) operations, leaving a \(j \times 1\) matrix; the next multiplication takes another \(nj \cdot 1\) operations. In total:

  • left to right is \(njk + nk\) = \(nk \cdot (j + 1)\).
  • right to left is \(jk + jn = j\cdot (k+n)\).

When \(n=j=k\), say, the first is roughly \(k^3\) operations and the second roughly \(2k^2\), so the right-to-left (reverse-mode) product uses far fewer operations. This can be quite significant in higher dimensions.

Example

Using the BenchmarkTools package, we can check the time to compute various products:

using BenchmarkTools
n,j,k,m = 20,15,10,1
@btime A*(B*C) setup=(A=rand(n,j); B=rand(j,k); C=rand(k,m));
@btime (A*B)*C setup=(A=rand(n,j); B=rand(j,k); C=rand(k,m));
  161.511 ns (4 allocations: 432 bytes)
  289.207 ns (4 allocations: 1.88 KiB)

The latter computation is roughly 1.8 times slower for the timings shown.

The relationship is reversed when the first matrix has a single row and the last matrix is the large one:

@btime A*(B*C) setup=(A=rand(m,k); B=rand(k,j); C=rand(j,n));
@btime (A*B)*C setup=(A=rand(m,k); B=rand(k,j); C=rand(j,n));
  359.976 ns (4 allocations: 1.88 KiB)
  205.310 ns (4 allocations: 432 bytes)

In calculus, we typically have \(n\) and \(m\) equal to \(1\), \(2\), or \(3\). But that need not be the case, especially if differentiation is over a parameter space.

60.4 Derivatives of matrix functions

What is the derivative of \(f(A) = A^2\)?

The function \(f\) takes an \(n\times n\) matrix and returns a matrix of the same size.

This derivative can be derived directly from the product rule:

\[ \begin{align*} df &= d(A^2) = d(AA)\\ &= dA A + A dA \end{align*} \]

That is \(f'(A)\) is the operator \(f'(A)[\delta A] = A \delta A + \delta A A\). (This is not \(2A\delta A\), as \(A\) may not commute with \(\delta A\).)
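A finite-difference sketch of this operator identity (the matrices and step size are made up):

using LinearAlgebra
A, δA = rand(4, 4), rand(4, 4)
t = 1e-6
lhs = ((A + t*δA)^2 - A^2) / t          # finite-difference approximation of f'(A)[δA]
rhs = A*δA + δA*A
norm(lhs - rhs)                         # of order t, not of order 1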

60.4.1 Vectorization of a matrix

Alternatively, we can identify \(A\) through its components, as a vector in \(R^{n^2}\) and then leverage the Jacobian.

One such identification is vectorization—consecutively stacking the column vectors into a single vector. In Julia the vec function does this operation:

@syms A[1:2, 1:2]
vec(A)

\(\left[\begin{smallmatrix}A₁_{₁}\\A₂_{₁}\\A₁_{₂}\\A₂_{₂}\end{smallmatrix}\right]\)

The stacking by column follows how Julia stores matrices and how Julia references entries in a matrix by linear index:

vec(A) == [A[i] for i in eachindex(A)]
true

With this vectorization operation, \(f\) may be viewed as \(\tilde{f}:R^{n^2} \rightarrow R^{n^2}\) through:

\[ \tilde{f}(\text{vec}(A)) = \text{vec}(f(A)) \]

We use SymPy to compute the Jacobian of this vector-valued function.

@syms A[1:3, 1:3]::real
f(x) = x^2
J = vec(f(A)).jacobian(vec(A)) # jacobian of f̃

\(\left[\begin{smallmatrix}2 A₁_{₁} & A₁_{₂} & A₁_{₃} & A₂_{₁} & 0 & 0 & A₃_{₁} & 0 & 0\\A₂_{₁} & A₁_{₁} + A₂_{₂} & A₂_{₃} & 0 & A₂_{₁} & 0 & 0 & A₃_{₁} & 0\\A₃_{₁} & A₃_{₂} & A₁_{₁} + A₃_{₃} & 0 & 0 & A₂_{₁} & 0 & 0 & A₃_{₁}\\A₁_{₂} & 0 & 0 & A₁_{₁} + A₂_{₂} & A₁_{₂} & A₁_{₃} & A₃_{₂} & 0 & 0\\0 & A₁_{₂} & 0 & A₂_{₁} & 2 A₂_{₂} & A₂_{₃} & 0 & A₃_{₂} & 0\\0 & 0 & A₁_{₂} & A₃_{₁} & A₃_{₂} & A₂_{₂} + A₃_{₃} & 0 & 0 & A₃_{₂}\\A₁_{₃} & 0 & 0 & A₂_{₃} & 0 & 0 & A₁_{₁} + A₃_{₃} & A₁_{₂} & A₁_{₃}\\0 & A₁_{₃} & 0 & 0 & A₂_{₃} & 0 & A₂_{₁} & A₂_{₂} + A₃_{₃} & A₂_{₃}\\0 & 0 & A₁_{₃} & 0 & 0 & A₂_{₃} & A₃_{₁} & A₃_{₂} & 2 A₃_{₃}\end{smallmatrix}\right]\)

We first recover this matrix from the linear operator via linear algebra, then see a more elegant manner following the notes.

A course in linear algebra shows that any linear operator on a finite vector space can be represented as a matrix. The basic idea is to represent what the operator does to each basis element and put these values as columns of the matrix.

In this \(3 \times 3\) case, the linear operator works on an object with \(9\) slots and returns an object with \(9\) slots, so the matrix will be \(9 \times 9\).

The basis elements are simply the matrices with a \(1\) in spot \((i,j)\) and zero elsewhere. Here we generate them through a function:

basis(i,j,A) = (b=zeros(Int, size(A)...); b[i,j] = 1; b)
JJ = [vec(basis(i,j,A)*A + A*basis(i,j,A)) for  j in 1:3 for i in 1:3]
9-element Vector{Vector{Sym{PyCall.PyObject}}}:
 [2*A₁_₁, A₂_₁, A₃_₁, A₁_₂, 0, 0, A₁_₃, 0, 0]
 [A₁_₂, A₁_₁ + A₂_₂, A₃_₂, 0, A₁_₂, 0, 0, A₁_₃, 0]
 [A₁_₃, A₂_₃, A₁_₁ + A₃_₃, 0, 0, A₁_₂, 0, 0, A₁_₃]
 [A₂_₁, 0, 0, A₁_₁ + A₂_₂, A₂_₁, A₃_₁, A₂_₃, 0, 0]
 [0, A₂_₁, 0, A₁_₂, 2*A₂_₂, A₃_₂, 0, A₂_₃, 0]
 [0, 0, A₂_₁, A₁_₃, A₂_₃, A₂_₂ + A₃_₃, 0, 0, A₂_₃]
 [A₃_₁, 0, 0, A₃_₂, 0, 0, A₁_₁ + A₃_₃, A₂_₁, A₃_₁]
 [0, A₃_₁, 0, 0, A₃_₂, 0, A₁_₂, A₂_₂ + A₃_₃, A₃_₂]
 [0, 0, A₃_₁, 0, 0, A₃_₂, A₁_₃, A₂_₃, 2*A₃_₃]

The elements of JJ show the representation of each of the \(9\) basis elements under the linear transformation.

To construct the matrix representing the linear operator, we need to concatenate these horizontally as column vectors:

JJ = hcat(JJ...)

\(\left[\begin{smallmatrix}2 A₁_{₁} & A₁_{₂} & A₁_{₃} & A₂_{₁} & 0 & 0 & A₃_{₁} & 0 & 0\\A₂_{₁} & A₁_{₁} + A₂_{₂} & A₂_{₃} & 0 & A₂_{₁} & 0 & 0 & A₃_{₁} & 0\\A₃_{₁} & A₃_{₂} & A₁_{₁} + A₃_{₃} & 0 & 0 & A₂_{₁} & 0 & 0 & A₃_{₁}\\A₁_{₂} & 0 & 0 & A₁_{₁} + A₂_{₂} & A₁_{₂} & A₁_{₃} & A₃_{₂} & 0 & 0\\0 & A₁_{₂} & 0 & A₂_{₁} & 2 A₂_{₂} & A₂_{₃} & 0 & A₃_{₂} & 0\\0 & 0 & A₁_{₂} & A₃_{₁} & A₃_{₂} & A₂_{₂} + A₃_{₃} & 0 & 0 & A₃_{₂}\\A₁_{₃} & 0 & 0 & A₂_{₃} & 0 & 0 & A₁_{₁} + A₃_{₃} & A₁_{₂} & A₁_{₃}\\0 & A₁_{₃} & 0 & 0 & A₂_{₃} & 0 & A₂_{₁} & A₂_{₂} + A₃_{₃} & A₂_{₃}\\0 & 0 & A₁_{₃} & 0 & 0 & A₂_{₃} & A₃_{₁} & A₃_{₂} & 2 A₃_{₃}\end{smallmatrix}\right]\)

The matrix \(JJ\) is identical to \(J\), above:

all(j == jj for (j, jj) in zip(J, JJ))
true

60.4.2 Kronecker products

But how can we see the Jacobian, \(J\), from the linear operator \(f'(A)[\delta A] = \delta A A + A \delta A\)?

To make this less magical, an operation related to \(\text{vec}\), the Kronecker product, is introduced.

The \(\text{vec}\) function turns a matrix into a vector by stacking its columns, so it can be used for finding the Jacobian, as above. However, the shape of the matrix is lost, as are the fundamental matrix operations, like multiplication.

The Kronecker product replicates values to make a bigger matrix: each entry \(a_{ij}\) of \(A\) is replaced by the block \(a_{ij}B\), a copy of \(B\) scaled by that entry.

Formally,

\[ A \otimes B = \begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ &\vdots & & \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{bmatrix} \]

The function kron forms this product:

@syms A[1:2, 1:3] B[1:3, 1:4]
kron(A, B) # same as hcat((vcat((A[i,j]*B for i in 1:2)...) for j in 1:3)...)

\(\left[\begin{smallmatrix}A₁_{₁} B₁_{₁} & A₁_{₁} B₁_{₂} & A₁_{₁} B₁_{₃} & A₁_{₁} B₁_{₄} & A₁_{₂} B₁_{₁} & A₁_{₂} B₁_{₂} & A₁_{₂} B₁_{₃} & A₁_{₂} B₁_{₄} & A₁_{₃} B₁_{₁} & A₁_{₃} B₁_{₂} & A₁_{₃} B₁_{₃} & A₁_{₃} B₁_{₄}\\A₁_{₁} B₂_{₁} & A₁_{₁} B₂_{₂} & A₁_{₁} B₂_{₃} & A₁_{₁} B₂_{₄} & A₁_{₂} B₂_{₁} & A₁_{₂} B₂_{₂} & A₁_{₂} B₂_{₃} & A₁_{₂} B₂_{₄} & A₁_{₃} B₂_{₁} & A₁_{₃} B₂_{₂} & A₁_{₃} B₂_{₃} & A₁_{₃} B₂_{₄}\\A₁_{₁} B₃_{₁} & A₁_{₁} B₃_{₂} & A₁_{₁} B₃_{₃} & A₁_{₁} B₃_{₄} & A₁_{₂} B₃_{₁} & A₁_{₂} B₃_{₂} & A₁_{₂} B₃_{₃} & A₁_{₂} B₃_{₄} & A₁_{₃} B₃_{₁} & A₁_{₃} B₃_{₂} & A₁_{₃} B₃_{₃} & A₁_{₃} B₃_{₄}\\A₂_{₁} B₁_{₁} & A₂_{₁} B₁_{₂} & A₂_{₁} B₁_{₃} & A₂_{₁} B₁_{₄} & A₂_{₂} B₁_{₁} & A₂_{₂} B₁_{₂} & A₂_{₂} B₁_{₃} & A₂_{₂} B₁_{₄} & A₂_{₃} B₁_{₁} & A₂_{₃} B₁_{₂} & A₂_{₃} B₁_{₃} & A₂_{₃} B₁_{₄}\\A₂_{₁} B₂_{₁} & A₂_{₁} B₂_{₂} & A₂_{₁} B₂_{₃} & A₂_{₁} B₂_{₄} & A₂_{₂} B₂_{₁} & A₂_{₂} B₂_{₂} & A₂_{₂} B₂_{₃} & A₂_{₂} B₂_{₄} & A₂_{₃} B₂_{₁} & A₂_{₃} B₂_{₂} & A₂_{₃} B₂_{₃} & A₂_{₃} B₂_{₄}\\A₂_{₁} B₃_{₁} & A₂_{₁} B₃_{₂} & A₂_{₁} B₃_{₃} & A₂_{₁} B₃_{₄} & A₂_{₂} B₃_{₁} & A₂_{₂} B₃_{₂} & A₂_{₂} B₃_{₃} & A₂_{₂} B₃_{₄} & A₂_{₃} B₃_{₁} & A₂_{₃} B₃_{₂} & A₂_{₃} B₃_{₃} & A₂_{₃} B₃_{₄}\end{smallmatrix}\right]\)

The Kronecker product of an \(m\times n\) matrix \(A\) and a \(j \times k\) matrix \(B\) has size \(mj \times nk\).

The Kronecker product has a certain algebra, including the following (the transpose and multiplication rules are checked numerically below):

  • transposes: \((A \otimes B)^T = A^T \otimes B^T\)
  • symmetry and orthogonality: \(A\otimes B\) is symmetric if both \(A\) and \(B\) are symmetric, and orthogonal if both are orthogonal
  • trace (sum of diagonal): \(\text{tr}(A \otimes B) = \text{tr}(A)\text{tr}(B)\)
  • determinants: \(\det(A\otimes B) = \det(A)^m \det(B)^n\), where \(A\) is \(n\times n\) and \(B\) is \(m \times m\)
  • inverses: \((A \otimes B)^{-1} = A^{-1} \otimes B^{-1}\)
  • multiplication: \((A\otimes B)(C \otimes D) = (AC) \otimes (BD)\)
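Here is a minimal numerical sketch of the transpose and multiplication rules with random matrices:

using LinearAlgebra
A, B = rand(2, 3), rand(3, 2)
C, D = rand(3, 4), rand(2, 5)
kron(A, B)' ≈ kron(A', B') && kron(A, B) * kron(C, D) ≈ kron(A*C, B*D)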

The main equation coupling vec and kron is the fact that if \(A\), \(B\), and \(C\) have appropriate sizes, then:

\[ (A \otimes B) \text{vec}(C) = \text{vec}(B C A^T). \]

Appropriate sizes for \(A\), \(B\), and \(C\) are determined by the various products in \(BCA^T\).

If \(A\) is \(m \times n\) and \(B\) is \(r \times s\), then since \(BC\) is defined, \(C\) has \(s\) rows, and since \(CA^T\) is defined, \(C\) must have \(n\) columns, as \(A^T\) is \(n \times m\); so \(C\) must be \(s\times n\). Checking this is consistent on the other side, \(A \otimes B\) has size \(mr \times ns\) and \(\text{vec}(C)\) has size \(sn\), so that product works, size-wise.

The notes referred to give an explanation for this formula, but we only confirm it with an example using \(m=n=2\) and \(r=s=3\):

@syms A[1:2, 1:2]::real B[1:3, 1:3]::real C[1:3, 1:2]::real
L, R = kron(A,B)*vec(C),  vec(B*C*A')
all(l == r for (l, r) in zip(L, R))
true

Now we use this relationship to reconcile \(df = A dA + dA A\) with the Jacobian computed from \(\text{vec}(f(A))\).

We have \(\text{vec}(A dA + dA A) = \text{vec}(A dA) + \text{vec}(dA A)\), by the obvious linearity of \(\text{vec}\). Now, inserting an identity matrix, \(I\), which is symmetric, in a useful spot we have:

\[ \text{vec}(A dA) = \text{vec}(A dA I^T) = (I \otimes A) \text{vec}(dA), \]

and

\[ \text{vec}(dA A) = \text{vec}(I dA (A^T)^T) = (A^T \otimes I) \text{vec}(dA). \]

This leaves

\[ \text{vec}(A dA + dA A) = \left((I \otimes A) + (A^T \otimes I)\right) \text{vec}(dA) \]

The following verifies that this reproduces the Jacobian computed above:

@syms A[1:3, 1:3]::real
using LinearAlgebra: I
J = vec(A^2).jacobian(vec(A))
JJ = kron(I(3), A) + kron(A', I(3))
all(j == jj for (j,jj) in zip(J,JJ))
true

This technique can also be used with other powers. For \(f(A) = A^3\), say, the product rule gives \(df = A^2 dA + A dA A + dA A^2\), which can be compared to a Jacobian through

\[ \begin{align*} \text{vec}(df) &= \text{vec}(A^2 dA I^T) + \text{vec}(A dA A) + \text{vec}(I dA A^2)\\ &= (I \otimes A^2)\text{vec}(dA) + (A^T \otimes A) \text{vec}(dA) + ((A^T)^2 \otimes I) \text{vec}(dA) \end{align*} \]

The above shows how to relate the derivative of a matrix function to the Jacobian of a vectorized function, but only for illustration. It is certainly not necessary to express the derivative of \(f\) in terms of the derivative of its vectorized counterpart.

Example: derivative of the matrix inverse

What is the derivative of \(f(A) = A^{-1}\)? The same technique used to find the derivative of the inverse of a univariate, scalar-valued function is useful. Starting with \(I = AA^{-1}\) and noting \(dI\) is \(0\) we have

\[ \begin{align*} 0 &= d(AA^{-1})\\ &= dAA^{-1} + A d(A^{-1}) \end{align*} \]

So, \(d(A^{-1}) = -A^{-1} dA A^{-1}\).

This could be re-expressed as a linear operator through

\[ \text{vec}(dA^{-1}) = -\left((A^{-1})^T \otimes A^{-1}\right) \text{vec}(dA) = -\left((A^T)^{-1} \otimes A^{-1}\right) \text{vec}(dA). \]
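A quick finite-difference check that \(d(A^{-1}) = -A^{-1}\,dA\,A^{-1}\) to first order, a sketch with made-up, well-conditioned data:

using LinearAlgebra
A, dA = rand(4, 4) + 4I, 1e-6 * rand(4, 4)             # keep A well conditioned; dA is small
norm(inv(A + dA) - inv(A) - (-inv(A) * dA * inv(A)))   # higher order: much smaller than norm(dA)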

Example: derivative of the matrix determinant

Let \(f(A) = \text{det}(A)\). What is the derivative?

First, the determinant of a square, \(n\times n\), matrix \(A\) is a scalar summary of \(A\). There are different means to compute the determinant, but this recursive one in particular is helpful here:

\[ \text{det}(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj} \]

for any \(j\). The cofactor \(C_{ij}\) is the determinant of the \((n-1)\times(n-1)\) matrix with the \(i\)th row and \(j\)th column deleted times \((-1)^{i+j}\).

To find the gradient of \(f\), we differentiate by each of the \(A_{ij}\) variables, and so

\[ \frac{\partial\text{det}(A)}{\partial A_{ij}} = \frac{\partial (a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj})}{\partial A_{ij}} = C_{ij}, \]

as no cofactor \(C_{kj}\) in this expansion depends on \(A_{ij}\) (each has the \(j\)th column deleted), so the only dependence on \(A_{ij}\) is through the single term \(a_{ij}C_{ij}\).

So the gradient is the matrix of cofactors.

Bright, Edelman, and Johnson (2025) also give a different proof, starting with this observation:

\[ \text{det}(I + dA) - \text{det}(I) = \text{tr}(dA). \]

Assuming that, then by the fact \(\text{det}(AB) = \text{det}(A)\text{det}(B)\):

\[ \begin{align*} \text{det}(A + A(A^{-1}dA)) - \text{det}(A) &= \text{det}(A)\cdot(\text{det}(I+ A^{-1}dA) - \text{det}(I)) \\ &= \text{det}(A) \text{tr}(A^{-1}dA)\\ &= \text{tr}(\text{det}(A)A^{-1}dA). \end{align*} \]

This agrees with the gradient above, as the inverse of a matrix can be computed from its cofactor matrix divided by its determinant: \(A^{-1} = C^T/\det(A)\), so \(\det(A) A^{-1} = C^T\) and \(\text{tr}(C^T\,dA) = \sum_{i,j} C_{ij}\,dA_{ij}\).
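As a numerical check, the gradient of the determinant with respect to the entries should be the cofactor matrix, which by the formula just mentioned equals \(\det(A)(A^{-1})^T\); a sketch assuming the ForwardDiff package:

using ForwardDiff, LinearAlgebra
A = rand(4, 4) + 4I
ForwardDiff.gradient(det, A) ≈ det(A) * inv(A)'    # the matrix of cofactors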

That the trace gets involved can be seen from this computation, which shows the only first-order terms come from the diagonal sum:

using LinearAlgebra
@syms dA[1:2, 1:2]
det(I + dA) - det(I)

\(dA₁_{₁} dA₂_{₂} + dA₁_{₁} - dA₁_{₂} dA₂_{₁} + dA₂_{₂}\)

60.5 The adjoint method

The chain rule brings about a series of products. The adjoint method, illustrated by Bright, Edelman, and Johnson (2025) and summarized below, shows how to organize the computation of this product in a direction that minimizes the computational cost, illustrating why reverse mode is preferred to forward mode when a scalar function of several variables is considered.

Bright, Edelman, and Johnson (2025) consider the derivative of

\[ g(p) = f(A(p)^{-1} b) \]

This might arise from applying a scalar-valued \(f\) to the solution of \(Ax = b\), where \(A\) is parameterized by \(p\). The number of parameters might be quite large, so how the resulting computation is organized can affect the computational costs.

The chain rule gives the following computation to find the derivative (or gradient):

\[ \begin{align*} dg &= f'(x)[dx]\\ &= f'(x) [d(A(p)^{-1} b)]\\ &= f'(x)[-A(p)^{-1} dA A(p)^{-1} b + 0]\\ &= -\textcolor{red}{f'(x) A(p)^{-1}} dA\textcolor{blue}{A(p)^{-1}[b]}. \end{align*} \]

By setting \(v^T = f'(x)A(p)^{-1}\) and writing \(x = A(p)^{-1}[b]\) this becomes

\[ dg = -v^T dA x. \]

This product of three terms can be computed in two directions:

From left to right:

First, \(v\) is found from \(v^T = f'(x) A^{-1}\) by solving \(v = (A^{-1})^T (f'(x))^T = (A^T)^{-1} \nabla f\), that is, by solving \(A^T v = \nabla f\). This is called the adjoint equation.

The partial derivatives of \(g\) in \(p\) are related to the partial derivatives of \(A\) through:

\[ \frac{\partial g}{\partial p_k} = -v^T\frac{\partial A}{\partial p_k} x, \]

as \(dA = \sum_k (\partial A/\partial p_k)\, dp_k\) and the scalar \(dp_k\) can be factored out. With \(v\) and \(x\) solved for (via the adjoint equation and by solving \(Ax=b\)), the partials in \(p_k\) are computed with matrix-vector and dot products. There are just two costly linear solves.

From right to left:

The value of \(x\) can be solved for, as above, but computing the value of

\[ \frac{\partial g}{\partial p_k} = -f'(x) \left(A^{-1} \frac{\partial A}{\partial p_k} x \right) \]

requires a costly solve of \(A^{-1}\frac{\partial A}{\partial p_k} x\) for each \(p_k\), and \(p\) may have many components. This is the difference: left to right requires only the single solve of the adjoint equation.

As mentioned above, the reverse mode offers advantages when there are many input parameters (\(p\)) and a single output parameter.
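A small worked sketch of this adjoint computation, with a made-up parameterization \(A(p) = A_0 + \sum_k p_k E_k\) (so \(\partial A/\partial p_k = E_k\)) and \(f(x) = \sum_i x_i^2\); the ForwardDiff package is assumed only to check the answer:

using ForwardDiff, LinearAlgebra
n = 4
A0, E = rand(n, n) + n*I, [rand(n, n) for _ in 1:3]   # made-up data; ∂A/∂pₖ = E[k]
Ap(p) = A0 + sum(p[k] * E[k] for k in eachindex(p))
b = rand(n)
f(x) = sum(x .^ 2); ∇f(x) = 2x
g(p) = f(Ap(p) \ b)

p = rand(3)
x = Ap(p) \ b                                   # one solve for x
v = Ap(p)' \ ∇f(x)                              # one adjoint solve: Aᵀ v = ∇f
adjoint_grad = [-v' * E[k] * x for k in 1:3]    # ∂g/∂pₖ = -vᵀ (∂A/∂pₖ) x
adjoint_grad ≈ ForwardDiff.gradient(g, p)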

Example

Suppose \(x(p)\) solves some system of equations \(h(x(p),p) = 0\) in \(R^n\) (\(n\) possibly just \(1\)) and \(g(p) = f(x(p))\) is some non-linear transformation of \(x\). What is the derivative of \(g\) in \(p\)?

Suppose the implicit function theorem applies to \(h(x,p) = 0\), that is locally the response \(x(p)\) has a derivative, and moreover by the chain rule

\[ 0 = \frac{\partial h}{\partial p} dp + \frac{\partial h}{\partial x} dx. \]

Solving the above for \(dx\) gives:

\[ dx = -\left(\frac{\partial h}{\partial x}\right)^{-1} \frac{\partial h}{\partial p} dp. \]

The chain rule applied to \(g(p) = f(x(p))\) then yields

\[ dg = f'(x) dx = - f'(x) \left(\frac{\partial h}{\partial x}\right)^{-1} \frac{\partial h}{\partial p} dp = -v^T\frac{\partial h}{\partial p} dp, \]

by setting

\[ v^T = f'(x) \left(\frac{\partial h}{\partial x}\right)^{-1}. \]

Here \(v\) can be solved for by taking adjoints (as before). Let \(A = \partial h/\partial x\); then \(v^T = f'(x) A^{-1}\) or \(v = (A^{-1})^T (f'(x))^T= (A^T)^{-1} \nabla f\). That is, \(v\) solves \(A^Tv=\nabla f\). As before, it takes two solves to get both \(g\) and its gradient.
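A scalar sketch of this computation, with the made-up example \(h(x,p) = x^2 - p\) (so \(x(p) = \sqrt{p}\)) and \(f(x) = \sin(x)\); the ForwardDiff package is assumed:

using ForwardDiff
h(x, p) = x^2 - p                    # made-up example with known solution x(p) = √p
f(x) = sin(x)
p = 2.0
x = sqrt(p)                                                                  # solves h(x, p) = 0
v = ForwardDiff.derivative(f, x) / ForwardDiff.derivative(x -> h(x, p), x)   # (∂h/∂x) v = f′(x)
dg_dp = -v * ForwardDiff.derivative(p -> h(x, p), p)                         # dg = -vᵀ (∂h/∂p) dp
dg_dp ≈ ForwardDiff.derivative(p -> f(sqrt(p)), p)                           # compare with direct computation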

60.6 Second derivatives, Hessian

We reference a theorem presented by Carlsson et al. (2025) for exposition, with some modification.

Theorem 1. Let \(f:X \rightarrow Y\), where \(X,Y\) are finite dimensional inner product spaces with elements in \(R\). Suppose \(f\) is smooth (a certain number of derivatives). Then for each \(x\) in \(X\) there exists a unique linear operator, \(f'(x)\), and a unique bilinear symmetric operator \(f'': X \oplus X \rightarrow Y\) such that

\[ f(x + \delta x) = f(x) + f'(x)[\delta x] + \frac{1}{2}f''(x)[\delta x, \delta x] + \mathscr{o}(\|\delta x \|^2). \]

New terms include bilinear, symmetric, and inner product. An operator (\(X\oplus X \rightarrow Y\)) is bilinear if it is a linear operator in each of its two arguments. Such an operator is symmetric if interchanging its two arguments makes no difference in its output. Finally, an inner product space is one with a generalization of the dot product. An inner product takes two vectors \(x\) and \(y\) and returns a scalar; it is denoted \(\langle x,y\rangle\); and has properties of symmetry, linearity, and non-negativity (\(\langle x,x\rangle \geq 0\), and equal \(0\) only if \(x\) is the zero vector.) Inner products can be used to form a norm (or length) for a vector through \(||x||^2 = \langle x,x\rangle\).

We reference this, as the values denoted \(f'\) and \(f''\) are unique. So if we identify them one way, we have identified them.

Specializing to \(X=R^n\) and \(Y=R^1\), we have \(f'=(\nabla f)^T\) and \(f''\) is the Hessian.

Take \(n=2\). Previously we wrote a formula for Taylor’s theorem for \(f:R^n \rightarrow R\); with \(n=2\) and \(x=\langle x_1,x_2\rangle\) it reads:

\[ \begin{align*} f(x + dx) &= f(x) + \frac{\partial f}{\partial x_1} dx_1 + \frac{\partial f}{\partial x_2} dx_2\\ &{+} \frac{1}{2}\left( \frac{\partial^2 f}{\partial x_1^2}dx_1^2 + 2\frac{\partial^2 f}{\partial x_1 \partial x_2}dx_1dx_2 + \frac{\partial^2 f}{\partial x_2^2}dx_2^2 \right) + \mathscr{o}(\|dx\|^2). \end{align*} \]

We can see that \(\nabla{f} \cdot dx = f'(x) dx\) tidies up part of the first line, and moreover the second line can be seen to be a matrix product:

\[ [dx_1 dx_2] \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2}\\ \frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} \end{bmatrix} \begin{bmatrix} dx_1\\ dx_2 \end{bmatrix} = dx^T H dx, \]

\(H\) being the Hessian with entries \(H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}\).

This formula, \(f(x+dx)-f(x) \approx f'(x)dx + \frac{1}{2}dx^T H dx\), is valid for any \(n\); the choice \(n=2\) was just for ease of notation when writing things in coordinates rather than as matrices.

By uniqueness, we have under these assumptions that the Hessian is symmetric and the expression \(dx^T H dx\) is a bilinear form, which we can identify as \(f''(x)[dx,dx]\).

That the Hessian is symmetric could also be derived under these assumptions by directly computing that the mixed partials can have their order exchanged. But in this framework, as explained by Bright, Edelman, and Johnson (2025) (and shown later) it is a result of the underlying vector space having an addition that is commutative (e.g. \(u+v = v+u\)).

The mapping \((u,v) \rightarrow u^T A v\) for a matrix \(A\) is bilinear. For a fixed \(u\), it is linear as it can be viewed as \((u^TA)[v]\) and matrix multiplication is linear. Similarly for a fixed \(v\).

Bright, Edelman, and Johnson (2025) extend this characterization to a broader setting.

We have for some function \(f\)

\[ df = f(x + dx) - f(x) = f'(x)[dx] \]

Then if \(d\tilde{x}\) is another differential change with the same shape as \(x\) we can look at the differential of \(f'(x)\):

\[ d(f') = f'(x + d\tilde{x}) - f'(x) = f''(x)[d\tilde{x}] \]

Now, \(d(f')\) has the same shape as \(f'\), a linear operator, hence \(d(f')\) is also a linear operator. Acting on \(dx\), we have

\[ d(f')[dx] = f''(x)[d\tilde{x}][dx] = f''(x)[d\tilde{x}, dx]. \]

The last equality is a definition. As \(f''\) is linear in the application to \(d\tilde{x}\) and also linear in the application to \(dx\), \(f''(x)\) is a bilinear operator.

Moreover, the following shows it is symmetric:

\[ \begin{align*} f''(x)[d\tilde{x}][dx] &= (f'(x + d\tilde{x}) - f'(x))[dx]\\ &= f'(x + d\tilde{x})[dx] - f'(x)[dx]\\ &= (f(x + d\tilde{x} + dx) - f(x + d\tilde{x})) - (f(x+dx) - f(x))\\ &= (f(x + dx + d\tilde{x}) - f(x + dx)) - (f(x + d\tilde{x}) - f(x))\\ &= f'(x + dx)[d\tilde{x}] - f'(x)[d\tilde{x}]\\ &= f''(x)[dx][d\tilde{x}] \end{align*} \]

So \(f''(x)[d\tilde{x},dx] = f''(x)[dx, d\tilde{x}]\). The key is the commutativity of vector addition, used to say \(dx + d\tilde{x} = d\tilde{x} + dx\) in passing from the third line to the fourth.

Example: Hessian is symmetric

As mentioned earlier, the Hessian is the matrix arising from finding the second derivative of a multivariate, scalar-valued function \(f:R^n \rightarrow R\). As a bilinear form on a finite-dimensional vector space, it can be written as \(\tilde{x}^T A x\). As this second derivative is symmetric and its value is a scalar, we have \(\tilde{x}^T A x = x^T A \tilde{x} = (x^T A \tilde{x})^T = \tilde{x}^T A^T x\). Since this holds for all \(\tilde{x}\) and \(x\), it follows that \(H = A\) must also be symmetric from general principles.

Example: second derivative of \(x^TAx\)

Consider an expression from earlier \(f(x) = x^T A x\) for some constant \(A\).

We have seen that \(f' = (\nabla f)^T = x^T(A+A^T)\). That is \(\nabla f = (A^T+A)x\) is linear in \(x\). The Jacobian of \(\nabla f\) is the Hessian, \(H = f'' = A + A^T\).
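A quick numerical confirmation of this Hessian, a sketch assuming the ForwardDiff package with random data:

using ForwardDiff
A, x = rand(4, 4), rand(4)
ForwardDiff.hessian(x -> x' * A * x, x) ≈ A + A'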

Example: second derivative of \(\text{det}(A)\)

Consider \(f(A) = \text{det}(A)\). We saw previously that:

\[ \begin{align*} \text{tr}(A + B) &= \text{tr}(A) + \text{tr}(B)\\ \text{det}(A + dA') &= \text{det}(A) + \text{det}(A)\text{tr}(A^{-1}dA')\\ (A + dA')^{-1} &= A^{-1} - A^{-1} dA' A^{-1} \end{align*} \]

These are all used to simplify:

\[ \begin{align*} \text{det}(A+dA')&\text{tr}((A + dA')^{-1} dA) - \text{det}(A) \text{tr}(A^{-1}dA) \\ &= \left( \text{det}(A) + \text{det}(A)\text{tr}(A^{-1}dA') \right) \text{tr}((A^{-1} - A^{-1}dA' A^{-1})dA)\\ &\quad{-} \text{det}(A) \text{tr}(A^{-1}dA) \\ &= \textcolor{blue}{\text{det}(A) \text{tr}(A^{-1}dA)}\\ &\quad{+} \text{det}(A)\text{tr}(A^{-1}dA')\text{tr}(A^{-1}dA) \\ &\quad{-} \text{det}(A)\text{tr}(A^{-1}dA' A^{-1}dA)\\ &\quad{-} \textcolor{red}{\text{det}(A)\text{tr}(A^{-1}dA')\text{tr}(A^{-1}dA' A^{-1}dA)}\\ &\quad{-} \textcolor{blue}{\text{det}(A) \text{tr}(A^{-1}dA)} \\ &= \text{det}(A)\text{tr}(A^{-1}dA')\text{tr}(A^{-1}dA) - \text{det}(A)\text{tr}(A^{-1}dA' A^{-1}dA)\\ &\quad{+} \textcolor{red}{\text{third order term}} \end{align*} \]

So, after dropping the third-order term, we see:

\[ f''(A)[dA,dA'] = \text{det}(A)\text{tr}(A^{-1}dA')\text{tr}(A^{-1}dA) - \text{det}(A)\text{tr}(A^{-1}dA' A^{-1}dA). \]
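A finite-difference sketch checking this formula, using the first derivative \(f'(A)[dA] = \det(A)\text{tr}(A^{-1}dA)\) from above and made-up data:

using LinearAlgebra
A = rand(4, 4) + 4I
dA, dA′ = rand(4, 4), rand(4, 4)
f′(A, dA) = det(A) * tr(A \ dA)                  # the first derivative from above
s = 1e-6
lhs = (f′(A + s*dA′, dA) - f′(A, dA)) / s        # approximates f''(A)[dA, dA′]
rhs = det(A) * tr(A \ dA′) * tr(A \ dA) - det(A) * tr((A \ dA′) * (A \ dA))
abs(lhs - rhs) / abs(rhs)                        # small relative error, of order s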