Orthonormal and unitary matrices
Energy preserving operators, generalized rotations
October 22, 2019 — September 19, 2023
\[ \renewcommand{\var}{\operatorname{Var}} \renewcommand{\dd}{\mathrm{d}} \renewcommand{\pd}{\partial} \renewcommand{\bb}[1]{\mathbb{#1}} \renewcommand{\vv}[1]{\boldsymbol{#1}} \renewcommand{\mm}[1]{\mathrm{#1}} \renewcommand{\mmm}[1]{\mathrm{#1}} \renewcommand{\cc}[1]{\mathcal{#1}} \renewcommand{\oo}[1]{\operatorname{#1}} \renewcommand{\gvn}{\mid} \renewcommand{\II}[1]{\mathbb{I}\{#1\}} \renewcommand{\inner}[2]{\langle #1,#2\rangle} \renewcommand{\Inner}[2]{\left\langle #1,#2\right\rangle} \renewcommand{\norm}[1]{\| #1\|} \renewcommand{\Norm}[1]{\left\| #1\right\|} \renewcommand{\argmax}{\operatorname{arg max}} \renewcommand{\argmin}{\operatorname{arg min}} \]
In which I think about parameterisations and implementations of finite-dimensional energy-preserving operators, a.k.a. orthonormal matrices. A particular nook in the linear feedback process library, closely related to stability in linear dynamical systems, since every orthonormal matrix is the forward operator of an energy-preserving system, which is an edge case for certain natural types of stability. Also important in random low-dimensional projections.
Uses include maintaining stable gradients in recurrent neural networks (Arjovsky, Shah, and Bengio 2016; Jing et al. 2017; Mhammedi et al. 2017) and efficient invertible normalising flows (van den Berg et al. 2018; Hasenclever, Tomczak, and Welling 2017). Also, parameterising stable Multi-Input-Multi-Output (MIMO) delay networks in signal processing. Probably other stuff too.
Terminology: Some writers say orthogonal matrices, although I prefer to reserve that term for matrices whose columns are mutually orthogonal but not necessarily of unit 2-norm; some say unitary matrices, which implies that the matrix is over the complex field instead of the reals, but is basically the same thing from my perspective.
We also might want to consider the implied manifolds upon which these objects live, the Stiefel manifold. Formally, the Stiefel manifold \(\mathcal{V}_{k, m}\) is the space of \(k\) frames in the \(m\)-dimensional real Euclidean space \(\mathbb{R}^{m},\) represented by the set of \(m \times k\) matrices \(\mm{M}\) such that \(\mm{M}^{\prime} \mm{M}=\mm{I}_{k},\) where \(\mm{I}_{k}\) is the \(k \times k\) identity matrix. Usually my purposes are served here by \(k=m\). There are some interesting cases in low dimensional projections served by \(k<m,\) including \(k=1.\)
Finding an orthonormal matrix is equivalent to choosing a finite orthonormal basis, so any way we can parameterise such a basis gives us an orthonormal matrix.
NB the orthogonality and normalisation constraints imply that an \(n\times n\) orthonormal matrix has \(n(n-1)/2\) free parameters.
TODO: discuss rectangular and square orthogonal matrices.
1 Take the QR decomposition
HT Russell Tsuchida for pointing out that the \(\mm{Q}\) matrix in the QR decomposition, \(\mm{M}=\mm{Q}\mm{R}\), by construction gives me an orthonormal matrix from any square matrix. Likewise the \(\mm{U},\mm{V}\) matrices in the SVD \(\mm{M}=\mm{U}\Sigma \mm{V}^*\). This construction is overparameterised, with \(n^2\) free parameters.
Constructing the QR decomposition via Householder reflections costs, Wikipedia reckons, \(\mathcal{O}(n^3)\) multiplications for an \(n\times n\) matrix.
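As a concrete sketch (mine, not from any particular source): in NumPy this is a one-liner, with an optional sign fix that makes the factorisation unique by forcing \(\operatorname{diag}(\mm{R})>0\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))

# Q from the QR decomposition is orthonormal by construction.
Q, R = np.linalg.qr(M)

# Optional: flip column signs so diag(R) > 0, which makes the factorisation unique.
Q = Q * np.sign(np.diag(R))

assert np.allclose(Q.T @ Q, np.eye(n))
```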
2 Other decompositions?
We can get something which looks similar via the Lanczos algorithm, which handles warm starts, finding \(\mm{A}=\mm{Q}\mm{T}\mm{Q}^{\top}\) for orthonormal \(\mm{Q}\), although the \(\mm{T}\) matrix is tridiagonal, which is not quite what we want.
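For concreteness, a bare-bones Lanczos recursion in NumPy (my own sketch: full reorthogonalisation for numerical safety, no handling of breakdown), building the orthonormal columns of \(\mm{Q}\) one at a time for a symmetric \(\mm{A}\):

```python
import numpy as np

def lanczos_q(A, q0, k):
    """Orthonormal basis Q (n x k) of the Krylov space of symmetric A, from start vector q0."""
    n = A.shape[0]
    Q = np.zeros((n, k))
    q = q0 / np.linalg.norm(q0)
    q_prev, beta = np.zeros(n), 0.0
    for j in range(k):
        Q[:, j] = q
        w = A @ q - beta * q_prev
        w = w - (q @ w) * q
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalisation
        beta = np.linalg.norm(w)
        q_prev, q = q, w / beta
    return Q

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
A = A + A.T                      # symmetric test matrix
Q = lanczos_q(A, rng.standard_normal(8), 5)
assert np.allclose(Q.T @ Q, np.eye(5))
```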
Question: do the spectral radius upper-bounds of NO-BEARS (Lee et al. 2019; Zhu et al. 2020) give us a pointer towards another method for finding such matrices? (HT Dario Draca for mentioning this.) I think that gets us something “sub”-orthonormal in general, since it will upper-bound the determinant. Or something.
3 Iterative normalising
Have a nearly orthonormal matrix? van den Berg et al. (2018) give a contraction which gets us closer to an orthonormal matrix: \[ \mm{Q}^{(k+1)}=\mm{Q}^{(k)}\left(\mm{I}+\frac{1}{2}\left(\mm{I}-\mm{Q}^{(k) \top} \mm{Q}^{(k)}\right)\right). \] This reputedly converges if \(\left\|\mm{Q}^{(0) \top} \mm{Q}^{(0)}-\mm{I}\right\|_{2}<1.\) They attribute this to Björck and Bowie (1971) and Kovarik (1970), where it arises as a Newton-type iteration for the nearest orthonormal matrix (the orthogonal factor of the polar decomposition). Here the iterations are clearly \(\mathcal{O}(n^2).\) An \(\mathcal{O}(n)\) option would be nice, but is intuitively not possible. This one is differentiable, however.
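A sketch of that contraction in NumPy (the function name and the fixed iteration count are mine):

```python
import numpy as np

def orthogonalise(Q, iters=20):
    """Björck–Bowie-style iteration towards the nearest orthonormal matrix.
    Converges when ||Q.T @ Q - I||_2 < 1, i.e. when Q starts out nearly orthonormal."""
    I = np.eye(Q.shape[1])
    for _ in range(iters):
        Q = Q @ (I + 0.5 * (I - Q.T @ Q))
    return Q

rng = np.random.default_rng(0)
n = 5
Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))            # exactly orthonormal
Q = orthogonalise(Q0 + 0.05 * rng.standard_normal((n, n)))   # small perturbation
assert np.allclose(Q.T @ Q, np.eye(n))
```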
4 Perturbing an existing orthonormal matrix
Products of unitary matrices are unitary, so multiplying by a unitary matrix maps unitary matrices to unitary matrices. We can even start from the identity matrix and apply successive perturbations to traverse the space of unitary matrices.
4.1 Householder reflections
We can apply successive reflections about hyperplanes, the so-called Householder reflections, to an orthonormal matrix to construct a new one. For a unit vector \(v\) the associated Householder reflection is \[\mm{H}(v)=\mm{I}-2vv^{*}.\] NB \(\det \mm{H}(v)=-1\), so each reflection flips the sign of the determinant; to stay in the rotation group of determinant \(+1\) we need to apply an even number of Householder reflections.
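A small NumPy illustration (the helper name is mine): compose two random reflections with an existing orthonormal matrix (here the identity) and check that orthonormality holds and the determinant stays \(+1\).

```python
import numpy as np

def householder(v):
    """Reflection about the hyperplane orthogonal to v: H = I - 2 v v^T (v normalised here)."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

rng = np.random.default_rng(0)
n = 5
Q = np.eye(n)
for _ in range(2):                       # an even number of reflections keeps det = +1
    Q = householder(rng.standard_normal(n)) @ Q

assert np.allclose(Q.T @ Q, np.eye(n))
assert np.isclose(np.linalg.det(Q), 1.0)
```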
4.2 Givens rotation
One obvious method for constructing unitary matrices is composing Givens rotations, which are atomic rotations in the plane spanned by two coordinate axes.
A Givens rotation is represented by a matrix of the form \[{\displaystyle \mm{G}(i,j,\theta )={\begin{bmatrix}1&\cdots &0&\cdots &0&\cdots &0\\\vdots &\ddots &\vdots &&\vdots &&\vdots \\0&\cdots &c&\cdots &-s&\cdots &0\\\vdots &&\vdots &\ddots &\vdots &&\vdots \\0&\cdots &s&\cdots &c&\cdots &0\\\vdots &&\vdots &&\vdots &\ddots &\vdots \\0&\cdots &0&\cdots &0&\cdots &1\end{bmatrix}},} \] where \(c = \cos \theta\) and \(s = \sin \theta\) appear at the intersections of the ith and jth rows and columns. The product \(\mm{G}(i,j,\theta)x\) represents a \(\theta\)-radian counterclockwise rotation of the vector x in the \((i,j)\) plane.
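A quick NumPy sketch (the `givens` helper is my own) that builds \(\mm{G}(i,j,\theta)\) explicitly and composes several of them into an orthonormal matrix; in practice one would apply the rotations to rows and columns directly rather than form the dense matrix.

```python
import numpy as np

def givens(n, i, j, theta):
    """Rotation by theta radians in the (i, j) coordinate plane of R^n."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i], G[j, j] = c, c
    G[i, j], G[j, i] = -s, s
    return G

rng = np.random.default_rng(0)
n = 4
Q = np.eye(n)
for i in range(n):
    for j in range(i + 1, n):            # one rotation per coordinate plane
        Q = givens(n, i, j, rng.uniform(0, 2 * np.pi)) @ Q

assert np.allclose(Q.T @ Q, np.eye(n))
```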
5 Cayley map
The Cayley map \(\mm{A}\mapsto(\mm{I}+\mm{A})^{-1}(\mm{I}-\mm{A})\) maps the skew-symmetric matrices onto the orthogonal matrices of determinant \(+1\) (those without \(-1\) as an eigenvalue), and parameterising skew-symmetric matrices is easy: take the strictly upper triangular part \(\mm{T}\) of some matrix and set \(\mm{A}=\mm{T}-\mm{T}^{\top}\). This still requires a matrix inversion in general, AFAICS.
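A NumPy sketch of that parameterisation (the `cayley` helper is mine; it uses a linear solve rather than an explicit inverse):

```python
import numpy as np

def cayley(A):
    """Cayley map of a skew-symmetric A: Q = (I + A)^{-1} (I - A), orthogonal with det +1."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I + A, I - A)

rng = np.random.default_rng(0)
n = 5
T = np.triu(rng.standard_normal((n, n)), k=1)   # strictly upper triangular parameters
A = T - T.T                                     # skew-symmetric: A.T == -A
Q = cayley(A)

assert np.allclose(Q.T @ Q, np.eye(n))
assert np.isclose(np.linalg.det(Q), 1.0)
```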
6 Exponential map
The exponential map (Golinski et al. 2019): given a skew-symmetric matrix \(\mathbf{A}\), i.e. a \(D \times D\) matrix such that \(\mathbf{A}^{\top}=-\mathbf{A}\), the matrix exponential \(\mathbf{Q}=\exp \mathbf{A}\) is always an orthogonal matrix with determinant 1. Moreover, any orthogonal matrix with determinant 1 can be written this way. However, computing the matrix exponential in general takes \(\mathcal{O}\left(D^3\right)\) time, so this parameterisation is only suitable for low-dimensional data.
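A minimal sketch using `scipy.linalg.expm` (the skew-symmetric parameterisation mirrors the Cayley construction above):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
D = 5
T = np.triu(rng.standard_normal((D, D)), k=1)
A = T - T.T                     # skew-symmetric: A.T == -A
Q = expm(A)                     # orthogonal, det(Q) = +1

assert np.allclose(Q.T @ Q, np.eye(D))
assert np.isclose(np.linalg.det(Q), 1.0)
```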
7 Parametric sub-families
Citing MATLAB, Nick Higham gives the following two parametric families of orthonormal matrices. These are clearly far from covering the whole space of orthonormal matrices.
\[ q_{ij} = \displaystyle\frac{2}{\sqrt{2n+1}}\sin \left(\displaystyle\frac{2ij\pi}{2n+1}\right) \]
\[ q_{ij} = \sqrt{\displaystyle\frac{2}{n}}\cos \left(\displaystyle\frac{(i-1/2)(j-1/2)\pi}{n} \right) \]
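A quick NumPy check of both families (the function names are mine; indices \(i,j\) run from 1 to \(n\)):

```python
import numpy as np

def orthog_sin(n):
    """q_ij = 2/sqrt(2n+1) * sin(2 i j pi / (2n+1))."""
    i, j = np.indices((n, n)) + 1
    return 2 / np.sqrt(2 * n + 1) * np.sin(2 * i * j * np.pi / (2 * n + 1))

def orthog_cos(n):
    """q_ij = sqrt(2/n) * cos((i - 1/2)(j - 1/2) pi / n), a DCT-like matrix."""
    i, j = np.indices((n, n)) + 1
    return np.sqrt(2 / n) * np.cos((i - 0.5) * (j - 0.5) * np.pi / n)

for Q in (orthog_sin(6), orthog_cos(6)):
    assert np.allclose(Q.T @ Q, np.eye(6))
```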
8 Structured
Orthogonal convolutions? TBD
9 Random distributions over
I wonder what the distribution of the orthonormal factor (say, the \(\mm{Q}\) from a QR decomposition) is for a matrix with independent standard Gaussian entries. Nick Higham has the answer, in his compact introduction to random orthonormal matrices. A uniform, rotation-invariant distribution is given by the Haar measure over the group of orthogonal matrices. He also gives a construction for drawing them by composing random Householder reflections derived from random standard normal vectors. See random rotations.
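A sketch of one standard recipe: take the QR decomposition of a Gaussian matrix and fix the column signs so that \(\operatorname{diag}(\mm{R})>0\); as I understand it, without that correction the result is not exactly Haar-distributed.

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Draw from the Haar (uniform, rotation-invariant) distribution on O(n)."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))    # fix column signs so that diag(R) > 0

rng = np.random.default_rng(0)
Q = haar_orthogonal(5, rng)
assert np.allclose(Q.T @ Q, np.eye(5))
# scipy.stats.ortho_group.rvs(5) draws from the same distribution.
```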
10 Hurwitz matrix
A related concept. Hurwitz matrices define asymptotically stable systems of ODEs, which is not the same as conserving the energy of a vector. Also they pack the transfer function polynomial in a weird way.