# Approximate matrix factorisations and decompositions

Sometimes even exact

August 5, 2014 — August 23, 2023

### Assumed audience:

People with undergrad linear algebra

## 1 The classics

The *big six* exact matrix decompositions are (Stewart 2000)

- Cholesky decomposition
- pivoted LU decomposition
- QR decomposition
- spectral decomposition
- Schur decomposition; and
- singular value decomposition.

See Nick Higham’s summary of those.

## 2 Approximate decompositions

Mastered QR and LU decompositions? There are now so many ways of factorising matrices that there are not enough acronyms in the alphabet to hold them, especially if we suspect our matrix is sparse, or could be made sparse because of some underlying constraint, or probably could, if squinted at in the right fashion, be seen as a graph transition matrix, or a Laplacian, or a noisy transform of some smooth object, or at least would be close to sparse if we chose the right metric, or…

A big matrix is close to, in some sense, the (tensor/matrix) product (or sum, or…) of some matrices that are in some way simple (small-rank, small dimension, sparse), possibly with additional constraints. Can we find those simple matrices?

Ethan Epperly’s introduction to Low-rank Matrices puts many ideas clearly.
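The prototypical answer to the question above is the truncated SVD: by the Eckart–Young theorem, keeping the top \(k\) singular triples gives the best rank-\(k\) approximation in Frobenius (and spectral) norm. A minimal numpy illustration (the matrix sizes and target rank below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 300))  # rank <= 50
k = 20  # target rank (arbitrary)

# Best rank-k approximation in Frobenius/spectral norm: truncated SVD (Eckart-Young).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] * s[:k] @ Vt[:k, :]

# The error is exactly the energy in the discarded singular values.
print(np.linalg.norm(A - A_k, "fro"), np.sqrt(np.sum(s[k:] ** 2)))
```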

Here’s an example: GoDec (T. Zhou and Tao 2011), a decomposition into low-rank *and* sparse components which, loosely speaking, combines multidimensional factorisation and outlier detection.

GoDec is one of the most efficient algorithms for low-rank and sparse decomposition, thanks to bilateral random projections (BRP), a fast approximation of SVD/PCA.
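To get a feel for it, here is a naive low-rank-plus-sparse alternation in that spirit (truncated SVD for the low-rank step, hard thresholding for the sparse step), rather than the BRP-accelerated algorithm of the paper; the dimensions and sparsity budget below are made up:

```python
import numpy as np

def lowrank_plus_sparse(X, rank, card, iters=50):
    """Naive alternation for X ≈ L + S with rank(L) <= rank and S having ~card nonzeros."""
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(iters):
        # Low-rank step: best rank-`rank` approximation of X - S via truncated SVD.
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = U[:, :rank] * s[:rank] @ Vt[:rank, :]
        # Sparse step: keep only the `card` largest-magnitude entries of X - L.
        R = X - L
        flat = np.abs(R).ravel()
        thresh = np.partition(flat, flat.size - card)[flat.size - card]
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S

rng = np.random.default_rng(1)
truth_L = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))  # low rank
truth_S = np.zeros((100, 80))
outliers = rng.choice(truth_S.size, size=50, replace=False)
truth_S.flat[outliers] = 10 * rng.standard_normal(50)                   # sparse outliers
L, S = lowrank_plus_sparse(truth_L + truth_S, rank=5, card=50)
print(np.linalg.norm(L - truth_L) / np.linalg.norm(truth_L))
```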

There are so many more of these things, depending on our preferred choice of metric, constraints, free variables and such.

Keywords: Matrix sketching, low-rank approximation, traditional dimensionality reduction.

Matrix concentration inequalities turn out to be a useful tool for proving that a given matrix decomposition is not too bad in a PAC sense.

Igor Carron’s Matrix Factorization Jungle classifies the following problems as matrix-factorisation type.

- Kernel Factorizations
- …
- Spectral clustering
- \([\mathrm{A} = \mathrm{D} \mathrm{X}]\) with unknown \(\mathrm{D}\) and \(\mathrm{X}\), solve for sparse \(\mathrm{X}\) and \(\mathrm{X}_i = 0\) or \(1\)
- K-Means / K-Median clustering
\([\mathrm{A} = \mathrm{D} \mathrm{X}]\) with unknown \(\mathrm{D}\) and \(\mathrm{X}\), solve for \(\mathrm{X} \mathrm{X}^{\top} = \mathrm{I}\) and \(\mathrm{X}_i = 0\) or \(1\)
- Subspace clustering
- \([\mathrm{A} = \mathrm{A} \mathrm{X}]\) with unknown \(\mathrm{X}\), solve for sparse/other conditions on \(\mathrm{X}\)
- Graph Matching
- \([\mathrm{A} = \mathrm{X} \mathrm{B} \mathrm{X} ^{\top}]\) with unknown \(\mathrm{X}\), \(\mathrm{B}\) solve for \(\mathrm{B}\) and \(\mathrm{X}\) as a permutation
- NMF
- \([\mathrm{A} = \mathrm{D} \mathrm{X}]\) with unknown \(\mathrm{D}\) and \(\mathrm{X}\), solve for elements of \(\mathrm{D}\),\(\mathrm{X}\) positive
- Generalized Matrix Factorization
\([\mathrm{W}.*\mathrm{L} − \mathrm{W}.*\mathrm{U} \mathrm{V}']\) with \(\mathrm{W}\) a known mask, \(\mathrm{U}\), \(\mathrm{V}\) unknowns; solve for \(\mathrm{U}\), \(\mathrm{V}\) and \(\mathrm{L}\) lowest rank possible
- Matrix Completion
\([\mathrm{A} = \mathrm{H}.*\mathrm{L}]\) with \(\mathrm{H}\) a known mask, \(\mathrm{L}\) unknown; solve for \(\mathrm{L}\) lowest rank possible
- Stable Principal Component Pursuit (SPCP) / Noisy Robust PCA
- \([\mathrm{A} = \mathrm{L} + \mathrm{S} + \mathrm{N}]\) with \(\mathrm{L}\), \(\mathrm{S}\), \(\mathrm{N}\) unknown, solve for \(\mathrm{L}\) low rank, \(\mathrm{S}\) sparse, \(\mathrm{N}\) noise
- Robust PCA
\([\mathrm{A} = \mathrm{L} + \mathrm{S}]\) with \(\mathrm{L}\), \(\mathrm{S}\) unknown, solve for \(\mathrm{L}\) low rank, \(\mathrm{S}\) sparse
- Sparse PCA
- \([\mathrm{A} = \mathrm{D} \mathrm{X} ]\) with unknown \(\mathrm{D}\) and \(\mathrm{X}\), solve for sparse \(\mathrm{D}\)
- Dictionary Learning
- \([\mathrm{A} = \mathrm{D} \mathrm{X}]\) with unknown \(\mathrm{D}\) and \(\mathrm{X}\), solve for sparse \(\mathrm{X}\)
- Archetypal Analysis
- \([\mathrm{A} = \mathrm{D} \mathrm{X}]\) with unknown \(\mathrm{D}\) and \(\mathrm{X}\), solve for \(\mathrm{D}= \mathrm{A} \mathrm{B}\) with \(\mathrm{D}\), \(\mathrm{B}\) positive
- Matrix Compressive Sensing (MCS)
find a rank-\(r\) matrix \(\mathrm{L}\) such that \([\mathrm{A}(\mathrm{L}) \approx b]\) or \([\mathrm{A}(\mathrm{L}+\mathrm{S}) = b]\)
- Multiple Measurement Vector (MMV)
- \([\mathrm{Y} = \mathrm{A} \mathrm{X}]\) with unknown \(\mathrm{X}\) and rows of \(\mathrm{X}\) are sparse
- Compressed sensing
- \([\mathrm{Y} = \mathrm{A} \mathrm{X}]\) with unknown \(\mathrm{X}\) and rows of \(\mathrm{X}\) are sparse, \(\mathrm{X}\) is one column.
- Blind Source Separation (BSS)
- \([\mathrm{Y} = \mathrm{A} \mathrm{X}]\) with unknown \(\mathrm{A}\) and \(\mathrm{X}\) and statistical independence between columns of \(\mathrm{X}\) or subspaces of columns of \(\mathrm{X}\)
- Partial and Online SVD/PCA
- …
- Tensor Decomposition
- Many, many options; see tensor decompositions for some tractable ones.

Truncated classic PCA is clearly also an example, but is excluded from the list for some reason. Boringness? The fact that it’s a special case of Sparse PCA?

I also add

- Square root
\(\mathrm{Y} = \mathrm{X}\mathrm{X}^{\top}\) for \(\mathrm{Y}\in\mathbb{R}^{N\times N}\), \(\mathrm{X}\in\mathbb{R}^{N\times n}\), with (typically) \(n<N\).

That is a whole thing; see matrix square root.
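For positive semi-definite \(\mathrm{Y}\), one cheap route to such a factor is a truncated eigendecomposition; a toy numpy sketch, assuming the number of retained columns is known:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((200, 10))
Y = B @ B.T                        # PSD, rank 10

# Truncated eigendecomposition gives a tall-skinny factor X with Y ≈ X X^T.
w, V = np.linalg.eigh(Y)           # eigenvalues in ascending order
n = 10                             # number of columns to keep (assumed known here)
w_top, V_top = w[-n:], V[:, -n:]
X = V_top * np.sqrt(np.clip(w_top, 0, None))
print(np.linalg.norm(Y - X @ X.T))
```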

See also learning on manifolds, compressed sensing, optimisation, random linear algebra and clustering, penalised regression…

## 3 Tutorials

- Data mining seminar: Matrix sketching
- Kumar and Schneider have a literature survey on low rank approximation of matrices (Kumar and Schneider 2016)
- Preconditioning tutorial by Erica Klarreich
- Andrew McGregor’s ICML Tutorial Streaming, sampling, sketching
- more at signals and graphs.
- Another one that makes the link to clustering is Chris Ding’s Principal Component Analysis and Matrix Factorizations for Learning
- Igor Carron’s Advanced Matrix Factorization Jungle.

## 4 Non-negative matrix factorisations
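In the \(\mathrm{A} \approx \mathrm{D}\mathrm{X}\)-with-nonnegative-factors sense above. A minimal sketch of the classic Lee–Seung multiplicative updates for the Frobenius objective (not a production NMF solver; the rank and iteration count here are arbitrary):

```python
import numpy as np

def nmf_multiplicative(A, rank, iters=500, eps=1e-12):
    """Lee-Seung multiplicative updates for A ≈ D @ X with D, X >= 0 (Frobenius loss)."""
    rng = np.random.default_rng(0)
    n, m = A.shape
    D = rng.random((n, rank))
    X = rng.random((rank, m))
    for _ in range(iters):
        X *= (D.T @ A) / (D.T @ D @ X + eps)   # update activations
        D *= (A @ X.T) / (D @ X @ X.T + eps)   # update dictionary
    return D, X

A = np.abs(np.random.default_rng(3).standard_normal((60, 40)))  # nonnegative data
D, X = nmf_multiplicative(A, rank=8)
print(np.linalg.norm(A - D @ X) / np.linalg.norm(A))
```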

## 5 Why is approximate factorisation ever useful?

For certain types of data matrix, here is a suggestive observation: Udell and Townsend (2019) ask “Why Are Big Data Matrices Approximately Low Rank?”

Matrices of (approximate) low rank are pervasive in data science, appearing in movie preferences, text documents, survey data, medical records, and genomics. While there is a vast literature on how to exploit low rank structure in these datasets, there is less attention paid to explaining why the low rank structure appears in the first place. Here, we explain the effectiveness of low rank models in data science by considering a simple generative model for these matrices: we suppose that each row or column is associated to a (possibly high dimensional) bounded latent variable, and entries of the matrix are generated by applying a piecewise analytic function to these latent variables. These matrices are in general full rank. However, we show that we can approximate every entry of an \(m\times n\) matrix drawn from this model to within a fixed absolute error by a low rank matrix whose rank grows as \(\mathcal{O}(\log(m+n))\). Hence any sufficiently large matrix from such a latent variable model can be approximated, up to a small entrywise error, by a low rank matrix.

Ethan Epperly argues from a function-approximation perspective (e.g.) that we can deduce this property from smoothness of functions.

Saul (2023) connects non-negative matrix factorisation to geometric algebra and linear algebra via deep learning and kernels. That sounds like fun.

## 6 As regression

Total Least Squares (a.k.a. orthogonal distance regression, or errors-in-variables least-squares linear regression) is a low-rank matrix approximation that minimises the Frobenius distance from the data matrix. Who knew?
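A toy sketch of the classic SVD recipe for TLS, which takes the smallest right singular vector of the augmented matrix \([\mathrm{X}\ \boldsymbol{y}]\); the data are synthetic and the noise scales arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 200, 3
X_true = rng.standard_normal((n, d))
beta_true = np.array([1.0, -2.0, 0.5])
# Noise in both the design matrix and the response: the errors-in-variables setting.
X = X_true + 0.05 * rng.standard_normal((n, d))
y = X_true @ beta_true + 0.05 * rng.standard_normal(n)

# TLS: smallest right singular vector of the augmented matrix [X y].
_, _, Vt = np.linalg.svd(np.column_stack([X, y]), full_matrices=False)
v = Vt[-1]                       # right singular vector for the smallest singular value
beta_tls = -v[:d] / v[d]

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_tls, beta_ols)
```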

Various other dimensionality reduction techniques can be put in a regression framing, notably Exponential-family PCA.

## 7 Sketching

“Sketching” is a common term to describe a certain type of low-rank factorisation, although I am not sure which types. 🏗

Martinsson (2016) mentions CUR and interpolative decompositions. What are those, now?
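For the interpolative decomposition at least, a serviceable sketch is: pick \(k\) columns via column-pivoted QR, then express all the columns in terms of that skeleton. This is only the deterministic skeleton of the idea, not Martinsson’s randomised algorithm:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(5)
A = rng.standard_normal((100, 15)) @ rng.standard_normal((15, 80))  # rank 15
k = 15

# Column-pivoted QR chooses k "representative" columns.
_, _, piv = qr(A, mode="economic", pivoting=True)
cols = piv[:k]
C = A[:, cols]                                   # column skeleton
# Interpolation coefficients: every column of A as a combination of the skeleton.
T, *_ = np.linalg.lstsq(C, A, rcond=None)
print(np.linalg.norm(A - C @ T))                 # ~0 here since rank(A) == k
```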

## 8 Randomization

Rather than find an optimal solution, why not just choose a random one which might be good enough? There are indeed randomised versions, and many algorithms are implemented using randomness and in particular low-dimensional projections.
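The basic randomised range-finder-plus-small-SVD recipe (in the style popularised by Halko, Martinsson, and Tropp) is short enough to write down; the oversampling parameter and sizes below are arbitrary:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, rng=None):
    """Basic randomised SVD: sketch the range of A with a Gaussian test matrix,
    orthonormalise, then do a small exact SVD in that subspace."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for an approximate range
    B = Q.T @ A                           # small (k + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(6)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))  # rank 40
U, s, Vt = randomized_svd(A, k=40, rng=rng)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```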

## 9 Connections to kernel learning

See (Grosse et al. 2012) for a mind-melting compositional matrix factorization diagram, constructing a search over hierarchical kernel decompositions that also turn out to have some matrix factorisation interpretations.

## 10 Bayesian

Nakajima and Sugiyama (2012):

Mnih and Salakhutdinov (2008) proposed a Bayesian maximum a posteriori (MAP) method based on the Gaussian noise model and Gaussian priors on the decomposed matrices. This method actually corresponds to minimizing the squared-loss with the trace-norm penalty (Srebro, Rennie, and Jaakkola 2004). Recently, the variational Bayesian (VB) approach (Attias 1999) has been applied to MF (Lim and Teh 2007; Raiko, Ilin, and Karhunen 2007), which we refer to as VBMF. The VBMF method was shown to perform very well in experiments. However, its good performance was not completely understood beyond its experimental success.

☜ Insert further developments here. Possibly Brouwer’s thesis (Brouwer 2017) or Shakir Mohamed’s (Mohamed 2011) would be a good start, or Benjamin Drave’s tutorial, Probabilistic Matrix Factorization and Xinghao Ding, Lihan He, and Carin (2011).

I am currently sitting in a seminar by He Zhao on Bayesian matrix factorisation, wherein he is building up this tool for discrete data, which is an interesting case. He starts from M. Zhou et al. (2012) and builds up to Zhao et al. (2018), introducing some hierarchical descriptions along the way. His methods seem to be sampling-based rather than variational (?).

Generalized² Linear² models (Gordon 2002) unify nonlinear matrix factorisations with Generalized Linear Models. I had not heard of that until recently; I wonder how common it is?

## 11 Lanczos decomposition

Lanczos decomposition is a handy approximation for matrices which are cheap to multiply because of some structure, but expensive to store. It can also be used to calculate an approximate inverse cheaply.

I learnt this trick from Gaussian process literature in the context of Lanczos Variance Estimates (LOVE) (Pleiss et al. 2018), although I believe it exists elsewhere.

Given some rank \(k\) and an arbitrary starting vector \(\boldsymbol{b}\), the Lanczos algorithm iteratively approximates \(\mathrm{K} \in\mathbb{R}^{n \times n}\) by a low-rank factorisation \(\mathrm{K}\approx \mathrm{Q} \mathrm{T} \mathrm{Q}^{\top}\), where \(\mathrm{T} \in \mathbb{R}^{k \times k}\) is tridiagonal and \(\mathrm{Q} \in \mathbb{R}^{n \times k}\) has orthonormal columns. Crucially, we never need to form \(\mathrm{K}\) explicitly; we only need to evaluate matrix-vector products \(\mathrm{K}\boldsymbol{b}\) for arbitrary vectors \(\boldsymbol{b}\). Moreover, given a Lanczos approximand \(\mathrm{Q},\mathrm{T}\), we may estimate \[\begin{align*} \mathrm{K}^{-1}\boldsymbol{c}\approx \mathrm{Q}\mathrm{T}^{-1}\mathrm{Q}^{\top}\boldsymbol{c}, \end{align*}\] even for \(\boldsymbol{c}\neq\boldsymbol{b}\). Say we wish to calculate \(\left(\mathrm{Z} \mathrm{Z}^{\top}+\sigma^2 \mathrm{I}\right)^{-1}\mathrm{B}\), with \(\mathrm{Z}\in\mathbb{R}^{D\times N }\) and \(N\ll D\).

We approximate the solution to this linear system using the partial Lanczos decomposition starting with probe vector \(\boldsymbol{b}=\overline{\mathrm{B}}\) and \(\mathrm{K}=\left(\mathrm{Z} \mathrm{Z}^{\top}+\sigma^2 \mathrm{I}\right)\).

This requires \(k\) matrix-vector products of the form \[\begin{align*} \underbrace{\left(\underbrace{\mathrm{Z} \mathrm{Z}^{\top}}_{\mathcal{O}(ND^2)}+\sigma^2 \mathrm{I}\right)\boldsymbol{b}}_{\mathcal{O}(D^2)} =\underbrace{\mathrm{Z} \underbrace{(\mathrm{Z}^{\top}\boldsymbol{b})}_{\mathcal{O}(ND)}}_{\mathcal{O}(ND)} +\sigma^2 \boldsymbol{b}. \end{align*}\] Using the latter representation, the required matrix-vector product may be found with a time complexity of \(\mathcal{O}(ND)\). Space complexity is also \(\mathcal{O}(ND)\). The output of the Lanczos decomposition is \(\mathrm{Q},\mathrm{T}\) such that \(\left(\mathrm{Z}\mathrm{Z}^{\top} +\sigma^2 \mathrm{I}\right)\boldsymbol{b}\approx \mathrm{Q} \mathrm{T} \mathrm{Q}^{\top}\boldsymbol{b}\). Then the solution to the inverse-matrix-vector product may be approximated by \(\left(\mathrm{Z} \mathrm{Z}^{\top} +\sigma^2 \mathrm{I}\right)^{-1} \mathrm{B}\approx \mathrm{Q}\mathrm{T}^{-1}\mathrm{Q}^{\top}\mathrm{B}\), requiring the solution in \(\mathrm{X}\) of the much smaller linear system \(\mathrm{X}\mathrm{T}=\mathrm{Q}\). Exploiting the positive-definiteness of \(\mathrm{T}\), we may use the Cholesky decomposition \(\mathrm{T}=\mathrm{L}^{\top}\mathrm{L}\) for a constant speedup over solving an arbitrary linear system. The time cost of the solution is \(\mathcal{O}(Dk^3)\), for an overall cost of the matrix inversion of \(\mathcal{O}(NDk+Dk^3)\).
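A toy sketch of the above pipeline, using a plain full-reorthogonalised Lanczos loop in numpy (this is the textbook iteration, not the LOVE implementation in GPyTorch; \(D\), \(N\), \(k\) and \(\sigma\) are made up, and I solve with \(\mathrm{T}\) directly rather than via its Cholesky factor):

```python
import numpy as np

def lanczos(matvec, b, k):
    """k steps of Lanczos with full reorthogonalisation.
    Returns Q (n x k, orthonormal columns) and tridiagonal T (k x k)."""
    n = b.shape[0]
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    q = b / np.linalg.norm(b)
    for j in range(k):
        Q[:, j] = q
        w = matvec(q)
        alpha[j] = q @ w
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)   # full reorthogonalisation
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:                           # Krylov space exhausted; stop early
            return (Q[:, : j + 1],
                    np.diag(alpha[: j + 1]) + np.diag(beta[:j], 1) + np.diag(beta[:j], -1))
        q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return Q, T

rng = np.random.default_rng(8)
D, N, sigma, k = 2000, 30, 0.1, 50
Z = rng.standard_normal((D, N))
matvec = lambda v: Z @ (Z.T @ v) + sigma**2 * v      # never form Z Z^T + sigma^2 I

c = rng.standard_normal(D)
Q, T = lanczos(matvec, c, k)
x_approx = Q @ np.linalg.solve(T, Q.T @ c)           # ≈ (Z Z^T + sigma^2 I)^{-1} c
x_exact = np.linalg.solve(Z @ Z.T + sigma**2 * np.eye(D), c)
print(np.linalg.norm(x_approx - x_exact) / np.linalg.norm(x_exact))
```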

Lifehack: Find derivatives of Lanczos iterations via pnkraemer/matfree: Matrix-free linear algebra in JAX. (Krämer et al. 2024).

## 12 Estimating rank

Eigencount and Numerical Rank

If \(f: \lambda \mapsto \mathbf{1}_{[a, b]}(\lambda)\) is the indicator (step) function on the interval \([a, b]\), then \(\operatorname{trace}(f(\mathbf{A}))\) estimates the number of eigenvalues of \(\mathbf{A}\) in that interval, which gives an inexpensive way to estimate the rank of a large matrix. Eigencount is closely related to Principal Component Analysis (PCA) and low-rank approximations in machine learning.
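A toy illustration of the identity, applying the step function exactly via an eigendecomposition and then estimating the trace stochastically with Hutchinson probes; in practice one would apply \(f(\mathbf{A})\) to the probes with a polynomial or rational filter rather than forming \(f(\mathbf{A})\):

```python
import numpy as np

rng = np.random.default_rng(9)
n, r = 300, 12
B = rng.standard_normal((n, r))
A = B @ B.T + 1e-3 * np.eye(n)           # effectively rank r, plus a tiny ridge

a, b = 0.5, np.inf                        # count eigenvalues in (a, b): the "numerical rank"
w, V = np.linalg.eigh(A)
f_of_A = V @ np.diag(((w > a) & (w < b)).astype(float)) @ V.T   # spectral projector

exact_count = np.trace(f_of_A)            # = number of eigenvalues in (a, b)

# Hutchinson estimator of the same trace from a handful of random probes.
m = 100
Zp = rng.choice([-1.0, 1.0], size=(n, m))                 # Rademacher probes
hutch = np.mean(np.einsum("ij,ij->j", Zp, f_of_A @ Zp))   # mean of z^T f(A) z
print(exact_count, hutch)
```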

## 13 Incremental decompositions

### 13.1 SVD

See incremental SVD.

### 13.2 Cholesky

## 14 Low rank plus diagonal

Specifically \((\mathrm{K}=\mathrm{Z} \mathrm{Z}^{\top}+\sigma^2\mathrm{I})\) where \(\mathrm{K}\in\mathbb{R}^{N\times N}\) and \(\mathrm{Z}\in\mathbb{R}^{N\times D}\) with \(D\ll N\). A workhorse.

Lots of fun tricks.
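For instance, the Woodbury identity lets us solve against \(\mathrm{K}\) by way of a \(D\times D\) system, at cost \(\mathcal{O}(ND^2 + D^3)\) rather than \(\mathcal{O}(N^3)\); a minimal numpy sketch (sizes arbitrary):

```python
import numpy as np

def solve_lowrank_plus_diag(Z, sigma2, B):
    """Solve (Z Z^T + sigma2 I) X = B via the Woodbury identity,
    touching only a D x D system (Z is N x D with D << N)."""
    D = Z.shape[1]
    small = sigma2 * np.eye(D) + Z.T @ Z          # D x D
    return (B - Z @ np.linalg.solve(small, Z.T @ B)) / sigma2

rng = np.random.default_rng(10)
N, D, sigma2 = 3000, 20, 0.5
Z = rng.standard_normal((N, D))
B = rng.standard_normal((N, 3))

X_fast = solve_lowrank_plus_diag(Z, sigma2, B)
X_direct = np.linalg.solve(Z @ Z.T + sigma2 * np.eye(N), B)
print(np.linalg.norm(X_fast - X_direct) / np.linalg.norm(X_direct))
```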

## 15 Misc

- Nick Higham, What Is a Rank-Revealing Factorization?

## 16 As an optimisation problem

There are some generalised optimisation problems which look useful for matrix factorisation, e.g. Bhardwaj, Klep, and Magron (2021):

Polynomial optimization problems (POP) are prevalent in many areas of modern science and engineering. The goal of POP is to minimize a given polynomial over a set defined by finitely many polynomial inequalities, a semialgebraic set. This problem is well known to be NP-hard, and has motivated research for more practical methods to obtain approximate solutions with high accuracy.[…]

One can naturally extend the ideas of positivity and sums of squares to the noncommutative (nc) setting by replacing the commutative variables \(z_1, \dots , z_n\) with noncommuting letters \(x_1, \dots , x_n\). The extension to the noncommutative setting is an inevitable consequence of the many areas of science which regularly optimize functions with noncommuting variables, such as matrices or operators. For instance in control theory, matrix completion, quantum information theory, or quantum chemistry

Matrix calculus can help sometimes.

## 17 \(\mathcal{H}\)-matrix methods

It seems like low-rank matrix factorisation could be related to \(\mathcal{H}\)-matrix methods, but I do not know enough to say more.

See hmatrix.org for one lab’s backgrounder and their implementation, h2lib; see hlibpro for a black-box, closed-source one.

## 18 Tools

In pytorch, various operations are made easier with cornellius-gp/linear_operator.

Ameli’s tools:

NMF Toolbox (MATLAB and Python):

Nonnegative matrix factorization (NMF) is a family of methods widely used for information retrieval across domains including text, images, and audio. Within music processing, NMF has been used for tasks such as transcription, source separation, and structure analysis. Prior work has shown that initialization and constrained update rules can drastically improve the chances of NMF converging to a musically meaningful solution. Along these lines we present the NMF toolbox, containing MATLAB and Python implementations of conceptually distinct NMF variants—in particular, this paper gives an overview for two algorithms. The first variant, called nonnegative matrix factor deconvolution (NMFD), extends the original NMF algorithm to the convolutive case, enforcing the temporal order of spectral templates. The second variant, called diagonal NMF, supports the development of sparse diagonal structures in the activation matrix. Our toolbox contains several demo applications and code examples to illustrate its potential and functionality. By providing MATLAB and Python code on a documentation website under a GNU-GPL license, as well as including illustrative examples, our aim is to foster research and education in the field of music processing.

Vowpal Wabbit factors matrices, e.g. for recommender systems. It seems the `--qr` version is more favoured.

HPC for MATLAB, R, Python, C++: libpmf:

LIBPMF implements the CCD++ algorithm, which aims to solve large-scale matrix factorization problems such as the low-rank factorization problems for recommender systems.

SPAMS (C++/MATLAB/Python) includes some matrix factorisations in its sparse approximation toolbox (see optimisation).

`scikit-learn` (Python) does a few matrix factorisations in its inimitable batteries-in-the-kitchen-sink way.

“… is a Python library for nonnegative matrix factorization. It includes implementations of several factorization methods, initialization approaches, and quality scoring. Both dense and sparse matrix representation are supported.”

Tapkee (C++). Pro-tip — even without coding C++, tapkee does a long list of dimensionality reductions from the CLI.

- PCA and randomized PCA
- Kernel PCA (kPCA)
- Random projection
- Factor analysis

tensorly supports some interesting tensor decompositions.

## 19 References

*Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion*. AM’18.

*Journal of Computer and System Sciences*, Special Issue on PODS 2001,.

*arXiv:1611.05162 [Cs, Stat]*.

*Applied Mathematics and Computation*.

*Neural Computation*.

*arXiv:1212.4777 [Cs, Stat]*.

*Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence*. UAI’99.

*IEEE Transactions on Signal Processing*.

*COLT*.

*arXiv:1309.3117 [Cs, Math]*.

*International Journal for Numerical Methods in Engineering*.

*Optimization With Sparsity-Inducing Penalties*. Foundations and Trends(r) in Machine Learning 1.0.

*Journal of Machine Learning Research*.

*arXiv:1709.10368 [Cond-Mat, Physics:math-Ph]*.

*arXiv:0808.0163 [Cs]*.

*arXiv:1512.07548 [Stat]*.

*Computational Statistics & Data Analysis*.

*IEEE Transactions on Audio, Speech, and Language Processing*.

*Computer Vision — ECCV 2002*.

*Linear Algebra and Its Applications*, Special Issue on Large Scale Linear and Nonlinear Eigenvalue Problems,.

*3rd International Symposium on Communications, Control and Signal Processing, 2008. ISCCSP 2008*.

*IEEE Transactions on Information Theory*.

*Proceedings of the 20th International Conference on Digital Audio Effects*.

*Numerische Mathematik*.

*IEEE Transactions on Audio, Speech, and Language Processing*.

*Proceedings of the 20th International Joint Conference on Artifical Intelligence*. IJCAI’07.

*IEEE Journal of Selected Topics in Signal Processing*.

*IEEE Signal Processing Magazine*.

*IEEE Transactions on Signal Processing*.

*arXiv:1609.00893 [Cs]*.

*2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings*.

*Communications on Pure and Applied Mathematics*.

*Inverse Problems*.

*arXiv:1303.6544 [Cs, Math]*.

*Random Structures & Algorithms*.

*Journal of Mathematical Analysis and Applications*.

*Advances In Neural Information Processing Systems*.

*IEEE Transactions on Knowledge and Data Engineering*.

*PLoS Comput Biol*.

*Proceedings of the 2005 SIAM International Conference on Data Mining*. Proceedings.

*IEEE Transactions on Pattern Analysis and Machine Intelligence*.

*arXiv:1706.08701 [Cs, Math]*.

*Proceedings of ISMIR*.

*SIAM Journal on Computing*.

*SIAM Journal on Computing*.

*Journal of Machine Learning Research*.

*Bioinformatics*.

*Institute of Mathematical Statistics Lecture Notes - Monograph Series*.

*Multivariate statistics: a vector space approach*. Lecture notes-monograph series / Institute of Mathematical Statistics 53.

*Linear Algebra and Its Applications*.

*Parallel Computing*, Graph analysis for scientific discovery,.

*SIAM Journal on Matrix Analysis and Applications*.

*Neural Computation*.

*New Journal of Physics*.

*Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing*. STOC ’11.

*Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*. KDD ’11.

*arXiv:1501.01711 [Cs]*.

*Proceedings of the 15th International Conference on Neural Information Processing Systems*. NIPS’02.

*IEEE Transactions on Information Theory*.

*Proceedings of the Conference on Uncertainty in Artificial Intelligence*.

*Physical Review Letters*.

*IEEE Transactions on Signal Processing*.

*IEEE Transactions on Neural Networks and Learning Systems*.

*SIAM Journal on Matrix Analysis and Applications*.

*Hierarchical Matrices: Algorithms and Analysis*. Springer Series in Computational Mathematics 49.

*SIAM Journal on Scientific Computing*.

*Applied Numerical Mathematics*, Third Chilean Workshop on Numerical Analysis of Partial Differential Equations (WONAPDE 2010),.

*The Annals of Statistics*.

*Journal of the American Statistical Association*.

*Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing*. STOC ’12.

*Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms*. Proceedings.

*Journal of Machine Learning Research*.

*Linear Algebra and Its Applications*.

*SIAM Review*.

*Advances in Neural Information Processing Systems*.

*International Conference on Machine Learning*.

*Proceedings of the 2002 12th IEEE Workshop on Neural Networks for Signal Processing, 2002*.

*Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*. KDD ’11.

*2013 European Conference on Mobile Robots (ECMR)*.

*2014 48th Asilomar Conference on Signals, Systems and Computers*.

*Advances in Neural Information Processing Systems*.

*PLOS ONE*.

*IEEE Transactions on Signal Processing*.

*Proceedings of the 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming*. PPoPP ’16.

*2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*.

*Lincoln Laboratory Journal*.

*Computing*.

*SIAM Journal on Matrix Analysis and Applications*.

*Computer*.

*Communications of the ACM*.

*Psychometrika*.

*arXiv:1606.06511 [Cs, Math]*.

*arXiv:1607.04331 [Cs, q-Bio, Stat]*.

*Proceedings of the 26th Annual International Conference on Machine Learning*. ICML ’09.

*Nature*.

*Advances in Neural Information Processing Systems 13*.

*Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*. KDD ’13.

*Proceedings of the National Academy of Sciences*.

*Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2001. CVPR 2001*.

*Proceedings of KDD Cup and Workshop*.

*Neural Computation*.

*Linear and Multilinear Algebra*.

*IEEE Transactions on Neural Networks and Learning Systems*.

*arXiv:1601.00238 [Cs, Stat]*.

*Randomized Algorithms for Matrices and Data*.

*Signal Processing*, Advances in Multirate Filter Bank Structures and Multiscale Representations,.

*Proceedings of the 26th Annual International Conference on Machine Learning*. ICML ’09.

*The Journal of Machine Learning Research*.

*Sparse Modeling for Image and Vision Processing*.

*arXiv:1607.01649 [Math]*.

*arXiv:1701.05363 [Math, q-Bio, Stat]*.

*Old and new matrix algebra useful for statistics*.

*Advances in Neural Information Processing Systems*.

*Journal of Machine Learning Research*.

*Foundations of Computational Mathematics*.

*Mathematical Geosciences*.

*arXiv:1511.09433 [Cs, Math, Stat]*.

*Environmetrics*.

*PLoS ONE*.

*Advances in Neural Information Processing Systems*.

*PPSC*.

*Machine Learning: ECML 2007*.

*SIAM J. Matrix Anal. Appl.*

*Proceedings of the National Academy of Sciences*.

*IEEE Transactions on Information Theory*.

*Foundations and Trends® in Theoretical Computer Science*.

*Proceedings of the 25th International Conference on Machine Learning*. ICML ’08.

*Transactions on Machine Learning Research*.

*2007 IEEE Workshop on Machine Learning for Signal Processing*.

*Low Rank Updates for the Cholesky Decomposition*.

*Proceedings of the National Academy of Sciences*.

*Entropy*.

*Machine Learning and Knowledge Discovery in Databases*.

*Independent Component Analysis and Blind Signal Separation*. Lecture Notes in Computer Science.

*arXiv:1701.01207 [Cs, Math, Stat]*.

*arXiv:1403.2877 [Cs, q-Bio, Stat]*.

*SIAM Journal on Computing*.

*Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing*. STOC ’04.

*arXiv:cs/0607105*.

*arXiv:0808.4134 [Cs]*.

*arXiv:0809.3232 [Cs]*.

*Advances in Neural Information Processing Systems 18*.

*Advances in Neural Information Processing Systems*. NIPS’04.

*Computing in Science Engineering*.

*Journal of Computational and Graphical Statistics*.

*arXiv:1609.00048 [Cs, Math, Stat]*.

*SIAM Journal on Matrix Analysis and Applications*.

*Proceedings of the IEEE*.

*arXiv:1507.03194 [Cs, Stat]*.

*IEEE Transactions on Signal Processing*.

*SIAM Journal on Mathematics of Data Science*.

*2008 IEEE International Conference on Acoustics, Speech and Signal Processing*.

*IEEE Transactions on Audio, Speech, and Language Processing*.

*Foundations and Trends® in Theoretical Computer Science*.

*arXiv:1606.08350 [Stat]*.

*2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*.

*arXiv:1702.04837 [Cs, Stat]*.

*PRoceedings of IJCAI, 2017*.

*In Proc. Asian Conf. On Comp. Vision*.

*IEEE Transactions on Knowledge and Data Engineering*.

*arXiv:1901.11436 [Cs, Eess, Stat]*.

*Sketching as a Tool for Numerical Linear Algebra*. Foundations and Trends in Theoretical Computer Science 1.0.

*Applied and Computational Harmonic Analysis*.

*High-dimensional data analysis with low-dimensional models: Principles, computation, and applications*.

*IEEE Transactions on Image Processing*.

*IEEE Transactions on Signal Processing*.

*arXiv:1502.03032 [Cs, Math, Stat]*.

*Journal of Machine Learning Research*.

*Foundations of Computational Mathematics*.

*IEEE Transactions on Pattern Analysis and Machine Intelligence*.

*IEEE International Conference of Data Mining*.

*Knowledge and Information Systems*.

*arXiv:1701.02324 [Cs]*.

*SIAM Journal on Mathematics of Data Science*.

*Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1 - Volume 01*. ICCV ’05.

*Seventh IEEE International Conference on Data Mining, 2007. ICDM 2007*.

*Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*. KDD ’17.

*arXiv:1701.00481 [Stat]*.

*Proceedings of the 35th International Conference on Machine Learning*.

*Proceedings of the 22nd International Conference on Neural Information Processing Systems*. NIPS’09.

*Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics*.

*Proceedings of the 28th International Conference on International Conference on Machine Learning*. ICML’11.

*Journal of Machine Learning Research*.

*arXiv:1808.01743 [Cs, q-Bio, Stat]*.