Linear algebra

If the thing is twice as big, the transformed version of the thing is also twice as big.
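
In symbols, that throwaway line is the homogeneity half of linearity; the full definition of a linear map T also demands additivity:

\[ T(\alpha x + \beta y) = \alpha \, T(x) + \beta \, T(y) \quad \text{for all scalars } \alpha, \beta \text{ and vectors } x, y. \]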



Oh! the hours I put into studying the taxonomy and husbandry of matrices. Time has passed. I have forgotten much. Jacobians have begun to seem downright Old Testament.

And when you put the various operations of matrix calculus into the mix (derivative of trace of a skew-Hermitian heffalump painted with a camel-hair brush), the combinatorial explosion of theorems and identities is intimidating.

Things I need:

Basic linear algebra intros

  • Jacob Ström, Kalle Åström, and Tomas Akenine-Möller, Immersive Math, “the world’s first linear algebra book with fully interactive figures.”

  • Kevin Brown on Bras, Kets, and Matrices

  • Stanford CS229’s Linear Algebra Review and Reference (PDF)

  • Fun: Tom Leinster, There are no non-trivial complex quarter turns but there are real ones, i.e.

    for a linear operator T on a real inner product space,

    \[ \langle T x, x \rangle = 0 \,\, \forall x \,\, \iff \,\, T^\ast = -T \]

    whereas for an operator on a complex inner product space,

    \[ \langle T x, x \rangle = 0 \,\, \forall x \,\, \iff \,\, T = 0. \]

    Cool. (There is a quick numerical sanity check of this after the list.)

  • Sheldon Axler’s Down with Determinants! (Axler 1995) is a readable and intuitive introduction for undergrads:

    Without using determinants, we will define the multiplicity of an eigenvalue and prove that the number of eigenvalues, counting multiplicities, equals the dimension of the underlying space. Without determinants, we’ll define the characteristic and minimal polynomials and then prove that they behave as expected. Next, we will easily prove that every matrix is similar to a nice upper-triangular one. Turning to inner product spaces, and still without mentioning determinants, we’ll have a simple proof of the finite-dimensional Spectral Theorem.

    Determinants are needed in one place in the undergraduate mathematics curriculum: the change of variables formula for multi-variable integrals. Thus at the end of this paper we’ll revive determinants, but not with any of the usual abstruse definitions. We’ll define the determinant of a matrix to be the product of its eigenvalues (counting multiplicities). This easy-to-remember definition leads to the usual formulas for computing determinants. We’ll derive the change of variables formula for multi-variable integrals in a fashion that makes the appearance of the determinant there seem natural.

    He wrote a whole textbook on this basis (Axler 2014).

  • a handy glossary is Mike Brookes’ Matrix Reference Manual

  • Singular Value Decomposition series, for its insight (see the numpy sketch after this list):

    Most of the time when people talk about linear algebra (even mathematicians), they’ll stick entirely to the linear map perspective or the data perspective, which is kind of frustrating when you’re learning it for the first time. It seems like the data perspective is just a tidy convenience, that it just “makes sense” to put some data in a table. In my experience the singular value decomposition is the first time that the two perspectives collide, and (at least in my case) it comes with cognitive dissonance.

  • Nick Higham presents the Moore-Penrose pseudoinverse as a member of a family of pseudoinverses.
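
Here is the promised numerical sanity check of the real half of Leinster’s quarter-turn fact: a minimal numpy sketch, where the skew-symmetric matrix is just a random example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A real skew-adjoint (T^T = -T) operator on R^5, built from a random matrix.
A = rng.standard_normal((5, 5))
T = A - A.T  # skew-symmetric by construction, and certainly not zero

# <Tx, x> = x^T T x should vanish for every real x, despite T != 0.
for _ in range(1000):
    x = rng.standard_normal(5)
    assert abs(x @ T @ x) < 1e-9

print("norm of T:", np.linalg.norm(T))  # nonzero, yet <Tx, x> = 0 identically
```

Over the complex numbers, by contrast, ⟨Tz, z⟩ = 0 for all z forces T = 0, which is exactly why there are no non-trivial complex quarter turns.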

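And the promised SVD sketch: a minimal numpy example of the linear-map-versus-data duality, with the Moore-Penrose pseudoinverse (which numpy itself computes via the SVD) falling out at the end.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))  # a "data" matrix, or a linear map R^3 -> R^6

# Thin SVD, the data-perspective factorization: X = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
assert np.allclose(X, U @ np.diag(s) @ Vt)

# Linear-map view: X sends each right singular vector to a scaled left one.
assert np.allclose(X @ Vt.T, U * s)

# Moore-Penrose pseudoinverse: invert the nonzero singular values.
X_pinv = Vt.T @ np.diag(1.0 / s) @ U.T
assert np.allclose(X_pinv, np.linalg.pinv(X))
```
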
Linear algebra and calculus

The multidimensional statistics/control theory workhorse.

See matrix calculus.
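
For instance, a cheap way of keeping matrix-calculus identities honest is to check them against finite differences. A minimal numpy sketch, verifying the textbook identity that the derivative of tr(AX) with respect to X is Aᵀ:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))
eps = 1e-6

# Central-difference gradient of f(X) = tr(AX), entry by entry.
grad = np.zeros_like(X)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(X)
        E[i, j] = eps
        grad[i, j] = (np.trace(A @ (X + E)) - np.trace(A @ (X - E))) / (2 * eps)

# The textbook identity: d tr(AX) / dX = A^T.
assert np.allclose(grad, A.T, atol=1e-6)
```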

Multilinear algebra

Oooh, you are playing with tensors? I don’t have a bunch to say, but here is a compact explanation of Einstein summation, which turns out to be as simple as it needs to be, but no simpler.
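
For concreteness, here is that convention as rendered by numpy’s `einsum` (a few standard examples; repeated indices are summed over, free indices survive):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))
v = rng.standard_normal(4)

# C_ik = A_ij B_jk  (matrix product: j is summed because it repeats)
assert np.allclose(np.einsum("ij,jk->ik", A, B), A @ B)

# y_i = A_ij v_j  (matrix-vector product)
assert np.allclose(np.einsum("ij,j->i", A, v), A @ v)

# B_jk B_jk  (every index repeated: the result is a scalar, here tr(B^T B))
assert np.allclose(np.einsum("jk,jk->", B, B), np.trace(B.T @ B))
```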

Fun tricks

John Cook on Sam Walters’ theorem on convex functions of eigenvalues and diagonals.
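
If I have the statement right, the punchline is that the diagonal of a symmetric matrix is majorized by its eigenvalues, so for any convex f the sum of f over the diagonal is at most the sum of f over the eigenvalues. A quick numpy spot-check of that inequality:

```python
import numpy as np

rng = np.random.default_rng(4)

# For symmetric A, the diagonal is majorized by the eigenvalues, hence
# sum_i f(a_ii) <= sum_i f(lambda_i) for every convex f.
for _ in range(100):
    A = rng.standard_normal((5, 5))
    A = (A + A.T) / 2  # symmetrize
    d = np.diag(A)
    lam = np.linalg.eigvalsh(A)
    for f in (np.exp, np.abs, np.square):  # all convex
        assert f(d).sum() <= f(lam).sum() + 1e-9
```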

Fun with determinants

Incoming

References

Axler, Sheldon. 1995. “Down with Determinants!” The American Mathematical Monthly 102 (2): 139–54.
———. 2014. Linear Algebra Done Right. New York: Springer.
Charlier, Benjamin, Jean Feydy, Joan Alexis Glaunès, François-David Collin, and Ghislain Durif. 2021. “Kernel Operations on the GPU, with Autodiff, Without Memory Overflows.” Journal of Machine Learning Research 22 (74): 1–6.
Dwyer, Paul S. 1967. “Some Applications of Matrix Derivatives in Multivariate Analysis.” Journal of the American Statistical Association 62 (318): 607.
Gallier, Jean, and Jocelyn Quaintance. 2022. Algebra, Topology, Differential Calculus, and Optimization Theory for Computer Science and Machine Learning.
Giles, M. 2008. “An Extended Collection of Matrix Derivative Results for Forward and Reverse Mode Automatic Differentiation.” January. http://eprints.maths.ox.ac.uk/1079.
Giles, Mike B. 2008. “Collected Matrix Derivative Results for Forward and Reverse Mode Algorithmic Differentiation.” In Advances in Automatic Differentiation, edited by Christian H. Bischof, H. Martin Bücker, Paul Hovland, Uwe Naumann, and Jean Utke, 64:35–44. Berlin, Heidelberg: Springer.
Golub, Gene H., and Charles F. Van Loan. 1983. Matrix Computations. JHU Press.
Golub, Gene H., and Gérard Meurant. 2010. Matrices, Moments and Quadrature with Applications. Princeton: Princeton University Press.
Graham, Alexander. 1981. Kronecker Products and Matrix Calculus: With Applications. Horwood.
Higham, Nicholas J. 2008. Functions of Matrices: Theory and Computation. Philadelphia: Society for Industrial and Applied Mathematics.
Hoaglin, David C., and Roy E. Welsch. 1978. “The Hat Matrix in Regression and ANOVA.” The American Statistician 32 (1): 17–22.
Laue, Soeren, Matthias Mitterreiter, and Joachim Giesen. 2018. “Computing Higher Order Derivatives of Matrix and Tensor Expressions.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 2750–59. Curran Associates, Inc.
Magnus, Jan R., and Heinz Neudecker. 2019. Matrix Differential Calculus with Applications in Statistics and Econometrics. 3rd ed. Wiley Series in Probability and Statistics. Hoboken, NJ: Wiley.
Mahoney, Michael W. 2010. Randomized Algorithms for Matrices and Data. Vol. 3.
Minka, Thomas P. 2000. “Old and New Matrix Algebra Useful for Statistics.”
Parlett, Beresford N. 2000. “The QR Algorithm.” Computing in Science & Engineering 2 (1): 38–42.
Petersen, Kaare Brandt, and Michael Syskind Pedersen. 2012. “The Matrix Cookbook.”
Saad, Yousef. 2003. Iterative Methods for Sparse Linear Systems. 2nd ed. SIAM.
Seber, George A. F. 2007. A Matrix Handbook for Statisticians. Wiley.
Simoncini, V. 2016. “Computational Methods for Linear Matrix Equations.” SIAM Review 58 (3): 377–441.
Steeb, Willi-Hans. 2006. Problems and Solutions in Introductory and Advanced Matrix Calculus. World Scientific.
Turkington, Darrell A. 2002. Matrix Calculus and Zero-One Matrices: Statistical and Econometric Applications. Cambridge; New York: Cambridge University Press.
