# (Approximate) matrix factorisation

Forget QR and LU decompositions; there are now so many ways of factorising matrices that there are not enough acronyms in the alphabet to name them all, especially if you suspect your matrix is sparse, or could be made sparse because of some underlying constraint, or could plausibly, if squinted at in the right fashion, be something like a graph transition matrix, or a Laplacian, or a noisy transform of some smooth object, or at least would be close to sparse if you chose the right metric, or…

Your big matrix is close to, in some sense, the (tensor/matrix) product (or sum, or…) of some matrices that are in some way simple (small-rank, small dimension, sparse), possibly with additional constraints. Can you find these simple matrices?
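
To fix intuitions, here is the plainest instance of the game: approximate a big matrix by the product of two thin ones. A minimal NumPy sketch using the truncated SVD, which by Eckart–Young gives the best rank-$$r$$ approximation in Frobenius norm (sizes and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 300))  # true rank 20
A += 0.01 * rng.standard_normal(A.shape)                             # plus a little noise

r = 20
U, s, Vt = np.linalg.svd(A, full_matrices=False)
L, R = U[:, :r] * s[:r], Vt[:r, :]                   # A ≈ L @ R: two thin, "simple" factors
print(np.linalg.norm(A - L @ R) / np.linalg.norm(A))  # small relative error
```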

Here’s an example: GoDec (Zhou and Tao 2011), a decomposition into low-rank and sparse components which, loosely speaking, combines multidimensional factorisation and outlier detection.
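
A hedged sketch of the GoDec-style alternation: alternately fit a rank-$$r$$ approximation to $$A - S$$, then keep the largest-magnitude entries of $$A - L$$ as the sparse part. This is the schematic version (the paper accelerates the low-rank step with bilateral random projections); `rank` and `card` are illustrative parameters:

```python
import numpy as np

def godec_sketch(A, rank, card, n_iter=50):
    """Split A ≈ L (low rank) + S (sparse, roughly `card` nonzeros)."""
    L, S = np.zeros_like(A), np.zeros_like(A)
    for _ in range(n_iter):
        # Low-rank step: best rank-`rank` fit to what the sparse part leaves over.
        U, s, Vt = np.linalg.svd(A - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
        # Sparse step: keep the `card` largest-magnitude residual entries.
        R = A - L
        thresh = np.sort(np.abs(R), axis=None)[-card]
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S
```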

There are so many more of these things, depending on your preferred choice of loss function, free variables and such.

Keywords: Matrix sketching, low-rank approximation, traditional dimensionality reduction.

Matrix concentration inequalities turn out to be useful in making this work.

• sparse or low-rank matrix approximation as clustering for density estimation, which is how I imagine high-dimensional mixture models would need to work, and thereby also
• Mercer kernel approximation.
• The connection to manifold learning is also probably worth examining.

Igor Carron’s Matrix Factorization Jungle classifies the following problems as matrix-factorisation type.

• Kernel factorizations
• Spectral clustering: $$A = DX$$ with unknown $$D$$ and $$X$$; solve for sparse $$X$$ with $$X_{ij} \in \{0, 1\}$$
• K-means / k-median clustering: $$A = DX$$ with unknown $$D$$ and $$X$$; solve for $$XX^T = I$$ and $$X_{ij} \in \{0, 1\}$$
• Subspace clustering: $$A = AX$$ with unknown $$X$$; solve for $$X$$ subject to sparsity or other conditions
• Graph matching: $$A = XBX^T$$ with unknown $$X$$ and $$B$$; solve for $$B$$ and for $$X$$ a permutation
• NMF: $$A = DX$$ with unknown $$D$$ and $$X$$; solve for elements of $$D$$, $$X$$ positive
• Generalized matrix factorization: $$W \circ L = W \circ (UV')$$ with $$W$$ a known mask and $$U$$, $$V$$ unknown; solve for $$U$$, $$V$$ and $$L$$ of lowest possible rank
• Matrix completion (a sketch follows this list): $$A = H \circ L$$ with $$H$$ a known mask and $$L$$ unknown; solve for $$L$$ of lowest possible rank
• Stable Principal Component Pursuit (SPCP) / noisy robust PCA: $$A = L + S + N$$ with $$L$$, $$S$$, $$N$$ unknown; solve for $$L$$ low rank, $$S$$ sparse, $$N$$ noise
• Robust PCA: $$A = L + S$$ with $$L$$, $$S$$ unknown; solve for $$L$$ low rank and $$S$$ sparse
• Sparse PCA: $$A = DX$$ with unknown $$D$$ and $$X$$; solve for sparse $$D$$
• Dictionary learning: $$A = DX$$ with unknown $$D$$ and $$X$$; solve for sparse $$X$$
• Archetypal analysis: $$A = DX$$ with unknown $$D$$ and $$X$$; solve for $$D = AB$$ with $$D$$, $$B$$ positive
• Matrix compressive sensing (MCS): find a rank-$$r$$ matrix $$L$$ such that $$\mathcal{A}(L) \approx b$$, or $$\mathcal{A}(L + S) = b$$
• Multiple measurement vector (MMV): $$Y = AX$$ with unknown $$X$$ whose rows are sparse
• Compressed sensing: $$Y = AX$$ with unknown $$X$$ whose rows are sparse, and $$X$$ a single column
• Blind source separation (BSS): $$Y = AX$$ with unknown $$A$$ and $$X$$, with statistical independence between columns of $$X$$ or subspaces of columns of $$X$$
• Partial and online SVD/PCA
• Tensor decomposition
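
To make one of these concrete, here is a minimal sketch of the matrix-completion entry above, in the SoftImpute style (iterative singular-value soft-thresholding), not any particular author's implementation; `lam` and `n_iter` are illustrative:

```python
import numpy as np

def soft_impute(A, mask, lam=1.0, n_iter=100):
    """Complete A where mask == 0, treating entries with mask == 1 as observed."""
    Z = np.zeros_like(A)
    for _ in range(n_iter):
        # Impute missing entries with the current low-rank guess, then shrink.
        U, s, Vt = np.linalg.svd(np.where(mask, A, Z), full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt
    return Z
```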

Truncated classic PCA is clearly also an example of this, but is excluded from the list for some reason. Boringness? The fact that it’s a special case of Sparse PCA?

## Why does it ever work

For certain types of data matrix, here is a possibly plausible explanation:

Udell and Townsend (2019) ask “Why Are Big Data Matrices Approximately Low Rank?”

> Matrices of (approximate) low rank are pervasive in data science, appearing in movie preferences, text documents, survey data, medical records, and genomics. While there is a vast literature on how to exploit low rank structure in these datasets, there is less attention paid to explaining why the low rank structure appears in the first place. Here, we explain the effectiveness of low rank models in data science by considering a simple generative model for these matrices: we suppose that each row or column is associated to a (possibly high dimensional) bounded latent variable, and entries of the matrix are generated by applying a piecewise analytic function to these latent variables. These matrices are in general full rank. However, we show that we can approximate every entry of an $$m\times n$$ matrix drawn from this model to within a fixed absolute error by a low rank matrix whose rank grows as $$\mathcal{O}(\log(m+n))$$. Hence any sufficiently large matrix from such a latent variable model can be approximated, up to a small entrywise error, by a low rank matrix.
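
A quick numerical check of that picture, under my own choice of a smooth link function standing in for "piecewise analytic": entries generated by a smooth function of low-dimensional latent variables give a matrix that is technically full rank but has tiny numerical rank.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random((500, 3))    # bounded latent variable per row
v = rng.random((700, 3))    # bounded latent variable per column
# An analytic function of the latents generates the entries.
A = np.exp(-((u[:, None, :] - v[None, :, :]) ** 2).sum(-1))
s = np.linalg.svd(A, compute_uv=False)
print((s / s[0] > 1e-6).sum())   # numerical rank: tiny compared to min(500, 700)
```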

## As regression

Total least squares (a.k.a. orthogonal distance regression, or errors-in-variables least-squares linear regression) is a low-rank matrix approximation that minimises the Frobenius norm of the difference from the data matrix. Who knew?
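
For the record, a minimal sketch of the SVD route to total least squares: the TLS coefficient vector falls out of the right singular vector of the augmented matrix $$[X\ y]$$ with smallest singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
b_true = np.array([1.0, -2.0, 0.5])
X = rng.standard_normal((200, 3))
y = X @ b_true
# Noise in both the regressors and the response: the errors-in-variables setting.
Xn = X + 0.05 * rng.standard_normal(X.shape)
yn = y + 0.05 * rng.standard_normal(y.shape)

_, _, Vt = np.linalg.svd(np.column_stack([Xn, yn]), full_matrices=False)
v = Vt[-1]                # right singular vector for the smallest singular value
b_tls = -v[:-1] / v[-1]   # de-homogenise the [b; -1] direction
print(b_tls)              # ≈ b_true
```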

## Sketching

“Sketching” is a common term for a certain type of low-rank factorisation, although I am not sure exactly which types it covers (Woodruff 2014 surveys the area). 🏗

The literature mentions CUR and interpolative decompositions. Does preconditioning fit?
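
As far as I understand it, the simplest member of the family is sketch-and-solve: compress a tall problem with a random matrix and solve the small compressed problem instead. A minimal sketch with illustrative sizes; I use a dense Gaussian sketch for clarity, whereas structured sketches are what make this pay off at scale:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 10))          # tall least-squares problem
y = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(10_000)

m = 200                                        # sketch size: far fewer rows
S = rng.standard_normal((m, X.shape[0])) / np.sqrt(m)
b_sketch, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None)
b_exact, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.linalg.norm(b_sketch - b_exact))      # close to the exact solution
```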

## $$\mathcal{H}$$-matrix methods

It seems like low-rank matrix factorisation could be related to $$\mathcal{H}$$-matrix methods, as seen in, e.g., covariance matrices, but I do not know enough to say more.
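
One suggestive numerical fact, though: for a smooth kernel, the off-diagonal block coupling two well-separated clusters of points is numerically low rank, which is the property $$\mathcal{H}$$-matrix methods exploit. A toy check:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)               # point cluster 1
z = np.linspace(5.0, 6.0, 200)               # cluster 2, well separated from it
K = 1.0 / np.abs(x[:, None] - z[None, :])    # off-diagonal block of a smooth kernel
s = np.linalg.svd(K, compute_uv=False)
print((s / s[0] > 1e-10).sum())              # numerical rank ≪ 200: very compressible
```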

See hmatrix.org for one lab’s backgrounder and their implementation, h2lib; hlibpro is a black-box closed-source alternative.

## Randomized methods

Rather than find an optimal solution, why not just choose a random one that might be good enough? There are indeed randomised versions (Halko, Martinsson, and Tropp 2009; Martinsson 2016).
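
A minimal sketch of the randomized range-finder approach of Halko, Martinsson, and Tropp (2009): multiply by a random test matrix, orthonormalise, and pay for a dense SVD only on the resulting small matrix. The oversampling parameter is an illustrative choice:

```python
import numpy as np

def randomized_svd(A, r, oversample=10, seed=0):
    """Approximate top-r SVD of A via a random range finder."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], r + oversample))
    Q, _ = np.linalg.qr(A @ Omega)             # orthonormal basis for ≈ range(A)
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :r], s[:r], Vt[:r, :]
```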

## Connections to kernel learning

See Grosse et al. (2012), “Exploiting compositionality to explore a large space of model structures”, for a mind-melting compositional matrix factorization diagram, constructing a search over hierarchical kernel decompositions with a matrix factorisation interpretation.

## Implementations

“Enough theory! Plug the hip new toy into my algorithm!”

OK.

NMF Toolbox (MATLAB and Python):

> Nonnegative matrix factorization (NMF) is a family of methods widely used for information retrieval across domains including text, images, and audio. Within music processing, NMF has been used for tasks such as transcription, source separation, and structure analysis. Prior work has shown that initialization and constrained update rules can drastically improve the chances of NMF converging to a musically meaningful solution. Along these lines we present the NMF toolbox, containing MATLAB and Python implementations of conceptually distinct NMF variants—in particular, this paper gives an overview for two algorithms. The first variant, called nonnegative matrix factor deconvolution (NMFD), extends the original NMF algorithm to the convolutive case, enforcing the temporal order of spectral templates. The second variant, called diagonal NMF, supports the development of sparse diagonal structures in the activation matrix. Our toolbox contains several demo applications and code examples to illustrate its potential and functionality. By providing MATLAB and Python code on a documentation website under a GNU-GPL license, as well as including illustrative examples, our aim is to foster research and education in the field of music processing.

Vowpal Wabbit does this, e.g. for recommender systems. It seems the `--qr` version is more favoured.

HPC for MATLAB, R, Python, and C++: libpmf:

> LIBPMF implements the CCD++ algorithm, which aims to solve large-scale matrix factorization problems such as the low-rank factorization problems for recommender systems.

NMF (R) 🏗

MATLAB: Chih-Jen Lin’s nmf.m: “This tool solves NMF by alternating non-negative least squares using projected gradients. It converges faster than the popular multiplicative update approach.”
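
For contrast, a sketch of the classic Lee–Seung multiplicative updates that such projected-gradient methods aim to beat: they preserve nonnegativity and decrease the Frobenius objective monotonically, but converge slowly. The `eps` guard against division by zero is my own addition:

```python
import numpy as np

def nmf_multiplicative(A, r, n_iter=500, eps=1e-9, seed=0):
    """Lee–Seung multiplicative updates for A ≈ W @ H with W, H ≥ 0."""
    rng = np.random.default_rng(seed)
    W, H = rng.random((A.shape[0], r)), rng.random((r, A.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # eps avoids division by zero
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H
```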

> In this repository, we offer both MPI and OPENMP implementation for MU, HALS and ANLS/BPP based NMF algorithms. This can run off the shelf as well easy to integrate in other source code. These are very highly tuned NMF algorithms to work on super computers. We have tested this software in NERSC as well OLCF cluster. The openmp implementation is tested on many different Linux variants with intel processors. The library works well for both sparse and dense matrix.

SPAMS (C++/MATLAB/Python) includes some matrix factorisations in its sparse approximation toolbox (see optimisation).

scikit-learn (Python) does a few matrix factorisations in its inimitable batteries-in-the-kitchen-sink way.
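
For instance, a minimal NMF fit with scikit-learn (assuming it is installed; the `nndsvda` initialisation and component count are illustrative choices):

```python
import numpy as np
from sklearn.decomposition import NMF

A = np.random.default_rng(0).random((100, 40))   # nonnegative data matrix
model = NMF(n_components=5, init="nndsvda", max_iter=500)
W = model.fit_transform(A)                       # A ≈ W @ H
H = model.components_
print(np.linalg.norm(A - W @ H))                 # Frobenius reconstruction error
```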

> nimfa is a Python library for nonnegative matrix factorization. It includes implementations of several factorization methods, initialization approaches, and quality scoring. Both dense and sparse matrix representations are supported.

Tapkee (C++). Pro-tip: even without coding C++, tapkee does a long list of dimensionality reductions from the CLI:

• PCA and randomized PCA
• Kernel PCA (kPCA)
• Random projection
• Factor analysis

tensorly supports some interesting tensor decompositions.
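
For example, a minimal CP/PARAFAC decomposition with tensorly (assuming it is installed; rank and sizes are illustrative):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

X = tl.tensor(np.random.default_rng(0).random((10, 12, 8)))
weights, factors = parafac(X, rank=3)       # X ≈ a sum of 3 rank-1 outer products
X_hat = tl.cp_to_tensor((weights, factors))
print(tl.norm(X - X_hat) / tl.norm(X))      # relative reconstruction error
```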

## References

Aarabi, Hadrien Foroughmand, and Geoffroy Peeters. 2018. In Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion, 27:1–7. AM’18. New York, NY, USA: ACM.
Abdallah, Samer A., and Mark D. Plumbley. 2004. In.
Achlioptas, Dimitris. 2003. Journal of Computer and System Sciences, Special Issue on PODS 2001, 66 (4): 671–87.
Aghasi, Alireza, Nam Nguyen, and Justin Romberg. 2016. arXiv:1611.05162 [Cs, Stat], November.
Ang, Andersen Man Shun, and Nicolas Gillis. 2018. Neural Computation 31 (2): 417–39.
Arora, Sanjeev, Rong Ge, Yoni Halpern, David Mimno, Ankur Moitra, David Sontag, Yichen Wu, and Michael Zhu. 2012. arXiv:1212.4777 [Cs, Stat], December.
Bach, Francis. 2013. arXiv:1309.3117 [Cs, Math], September.
Bach, Francis R. 2013. In COLT, 30:185–209.
Bach, Francis R, and Michael I Jordan. 2002. “Kernel Independent Component Analysis.” Journal of Machine Learning Research 3 (July): 48.
Bach, Francis, Rodolphe Jenatton, and Julien Mairal. 2011. Optimization With Sparsity-Inducing Penalties. Foundations and Trends(r) in Machine Learning 1.0. Now Publishers Inc.
Bagge Carlson, Fredrik. 2018. Thesis/docmono, Lund University.
Barbier, Jean, Nicolas Macris, and Léo Miolane. 2017. arXiv:1709.10368 [Cond-Mat, Physics:math-Ph], September.
Batson, Joshua, Daniel A. Spielman, and Nikhil Srivastava. 2008. arXiv:0808.0163 [Cs], August.
Bauckhage, Christian. 2015. arXiv:1512.07548 [Stat], December.
Berry, Michael W., Murray Browne, Amy N. Langville, V. Paul Pauca, and Robert J. Plemmons. 2007. Computational Statistics & Data Analysis 52 (1): 155–73.
Bertin, N., R. Badeau, and E. Vincent. 2010. IEEE Transactions on Audio, Speech, and Language Processing 18 (3): 538–49.
Bruckstein, A. M., Michael Elad, and M. Zibulevsky. 2008a. In 3rd International Symposium on Communications, Control and Signal Processing, 2008. ISCCSP 2008, 762–67.
———. 2008b. IEEE Transactions on Information Theory 54 (11): 4813–20.
Buch, Michael, Elio Quinton, and Bob L Sturm. 2017. “NichtnegativeMatrixFaktorisierungnutzendesKlangsynthesenSystem (NiMFKS): Extensions of NMF-Based Concatenative Sound Synthesis.” In Proceedings of the 20th International Conference on Digital Audio Effects, 7. Edinburgh.
Caetano, Marcelo, and Xavier Rodet. 2013. IEEE Transactions on Audio, Speech, and Language Processing 21 (8): 1666–75.
Cao, Bin, Dou Shen, Jian-Tao Sun, Xuanhui Wang, Qiang Yang, and Zheng Chen. n.d. In.
Carabias-Orti, J. J., T. Virtanen, P. Vera-Candeas, N. Ruiz-Reyes, and F. J. Canadas-Quesada. 2011. IEEE Journal of Selected Topics in Signal Processing 5 (6): 1144–58.
Chi, Yuejie, Yue M. Lu, and Yuxin Chen. 2019. IEEE Transactions on Signal Processing 67 (20): 5239–69.
Cichocki, A., N. Lee, I. V. Oseledets, A.-H. Phan, Q. Zhao, and D. Mandic. 2016. arXiv:1609.00893 [Cs], September.
Cichocki, A., R. Zdunek, and S. Amari. 2006. In 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, 5:V–.
Cohen, Albert, Ingrid Daubechies, and Jean-Christophe Feauveau. 1992. Communications on Pure and Applied Mathematics 45 (5): 485–560.
Combettes, Patrick L., and Jean-Christophe Pesquet. 2008. Inverse Problems 24 (6): 065014.
Dasarathy, Gautam, Parikshit Shah, Badri Narayan Bhaskar, and Robert Nowak. 2013. arXiv:1303.6544 [Cs, Math], March.
Dasgupta, Sanjoy, and Anupam Gupta. 2003. Random Structures & Algorithms 22 (1): 60–65.
Defferrard, Michaël, Xavier Bresson, and Pierre Vandergheynst. 2016. In Advances In Neural Information Processing Systems.
Desai, A., M. Ghashami, and J. M. Phillips. 2016. IEEE Transactions on Knowledge and Data Engineering 28 (7): 1678–90.
Devarajan, Karthik. 2008. PLoS Comput Biol 4 (7): e1000029.
Ding, C., X. He, and H. Simon. 2005. In Proceedings of the 2005 SIAM International Conference on Data Mining, 606–10. Proceedings. Society for Industrial and Applied Mathematics.
Ding, C., Tao Li, and M.I. Jordan. 2010. IEEE Transactions on Pattern Analysis and Machine Intelligence 32 (1): 45–55.
Dokmanić, Ivan, and Rémi Gribonval. 2017. arXiv:1706.08701 [Cs, Math], June.
Driedger, Jonathan, and Thomas Pratzlich. 2015. In Proceedings of ISMIR, 7. Malaga.
Drineas, Petros, and Michael W. Mahoney. 2005. Journal of Machine Learning Research 6 (December): 2153–75.
Dueck, Delbert, Quaid D. Morris, and Brendan J. Frey. 2005. Bioinformatics 21 (suppl 1): i144–51.
Ellis, Robert L., and David C. Lay. 1992. Linear Algebra and Its Applications 173 (August): 19–38.
Fairbanks, James P., Ramakrishnan Kannan, Haesun Park, and David A. Bader. 2015. Parallel Computing, Graph analysis for scientific discovery, 47 (August): 38–50.
Févotte, Cédric, Nancy Bertin, and Jean-Louis Durrieu. 2008. Neural Computation 21 (3): 793–830.
Flammia, Steven T., David Gross, Yi-Kai Liu, and Jens Eisert. 2012. New Journal of Physics 14 (9): 095022.
Fung, Wai Shing, Ramesh Hariharan, Nicholas J.A. Harvey, and Debmalya Panigrahi. 2011. In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, 71–80. STOC ’11. New York, NY, USA: ACM.
Gemulla, Rainer, Erik Nijkamp, Peter J. Haas, and Yannis Sismanis. 2011. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 69–77. KDD ’11. New York, NY, USA: ACM.
Ghashami, Mina, Edo Liberty, Jeff M. Phillips, and David P. Woodruff. 2015. arXiv:1501.01711 [Cs], January.
Gross, D. 2011. IEEE Transactions on Information Theory 57 (3): 1548–66.
Gross, David, Yi-Kai Liu, Steven T. Flammia, Stephen Becker, and Jens Eisert. 2010. Physical Review Letters 105 (15).
Grosse, Roger, Ruslan R. Salakhutdinov, William T. Freeman, and Joshua B. Tenenbaum. 2012. In Proceedings of the Conference on Uncertainty in Artificial Intelligence.
Guan, Naiyang, Dacheng Tao, Zhigang Luo, and Bo Yuan. 2012. IEEE Transactions on Signal Processing 60 (6): 2882–98.
Guan, N., D. Tao, Z. Luo, and B. Yuan. 2012. IEEE Transactions on Neural Networks and Learning Systems 23 (7): 1087–99.
Hackbusch, Wolfgang. 2015. Hierarchical Matrices: Algorithms and Analysis. 1st ed. Springer Series in Computational Mathematics 49. Heidelberg New York Dordrecht London: Springer Publishing Company, Incorporated.
Halko, Nathan, Per-Gunnar Martinsson, and Joel A. Tropp. 2009. arXiv:0909.4061 [Math], September.
Hassanieh, Haitham, Piotr Indyk, Dina Katabi, and Eric Price. 2012. In Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing, 563–78. STOC ’12. New York, NY, USA: ACM.
Hassanieh, H., P. Indyk, D. Katabi, and E. Price. 2012. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, 1183–94. Proceedings. Kyoto, Japan: Society for Industrial and Applied Mathematics.
Heinig, Georg, and Karla Rost. 2011. Linear Algebra and Its Applications 435 (1): 1–59.
Hoffman, Matthew D, David M Blei, and Perry R Cook. 2010. In International Conference on Machine Learning, 8.
Hoffman, Matthew, Francis R. Bach, and David M. Blei. 2010. In Advances in Neural Information Processing Systems, 856–64.
Hoyer, P.O. 2002. In Proceedings of the 2002 12th IEEE Workshop on Neural Networks for Signal Processing, 2002, 557–65.
Hsieh, Cho-Jui, and Inderjit S. Dhillon. 2011. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1064–72. KDD ’11. New York, NY, USA: ACM.
Hu, Tao, Cengiz Pehlevan, and Dmitri B. Chklovskii. 2014. In 2014 48th Asilomar Conference on Signals, Systems and Computers.
Huang, G., M. Kaess, and J. J. Leonard. 2013. In 2013 European Conference on Mobile Robots (ECMR), 150–57.
Iliev, Filip L., Valentin G. Stanev, Velimir V. Vesselinov, and Boian S. Alexandrov. 2018. PLOS ONE 13 (3): e0193974.
Kannan, Ramakrishnan. 2016. April.
Kannan, Ramakrishnan, Grey Ballard, and Haesun Park. 2016. In Proceedings of the 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 9:1–11. PPoPP ’16. New York, NY, USA: ACM.
Keriven, Nicolas, Anthony Bourrier, Rémi Gribonval, and Patrick Pérez. 2016. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 6190–94.
Keshava, Nirmal. 2003. Lincoln Laboratory Journal 14 (1): 55–78.
Khoromskij, B. N., A. Litvinenko, and H. G. Matthies. 2009. Computing 84 (1-2): 49–67.
Kim, H., and H. Park. 2008. SIAM Journal on Matrix Analysis and Applications 30 (2): 713–30.
Koren, Yehuda, Robert Bell, and Chris Volinsky. 2009. Computer 42 (8): 30–37.
Koutis, Ioannis, Gary L. Miller, and Richard Peng. 2012. Communications of the ACM 55 (10): 99–107.
Kruskal, J. B. 1964. Psychometrika 29 (2): 115–29.
Kumar, N. Kishore, and Jan Shneider. 2016. arXiv:1606.06511 [Cs, Math], June.
Lahiri, Subhaneil, Peiran Gao, and Surya Ganguli. 2016. arXiv:1607.04331 [Cs, q-Bio, Stat], July.
Lawrence, Neil D., and Raquel Urtasun. 2009. In Proceedings of the 26th Annual International Conference on Machine Learning, 601–8. ICML ’09. New York, NY, USA: ACM.
Lee, Daniel D., and H. Sebastian Seung. 1999. Nature 401 (6755): 788–91.
———. 2001. In Advances in Neural Information Processing Systems 13, edited by T. K. Leen, T. G. Dietterich, and V. Tresp, 556–62. MIT Press.
Li, Chi-Kwong, and Edward Poon. 2002. Linear and Multilinear Algebra 50 (4): 321–26.
Li, S.Z., XinWen Hou, HongJiang Zhang, and Qiansheng Cheng. 2001. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2001. CVPR 2001, 1:I-207-I-212 vol.1.
Liberty, Edo. 2013. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 581–88. KDD ’13. New York, NY, USA: ACM.
Liberty, Edo, Franco Woolfe, Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert. 2007. Proceedings of the National Academy of Sciences 104 (51): 20167–72.
Lin, Chih-Jen. 2007. Neural Computation 19 (10): 2756–79.
Lin, Zhouchen. n.d.
Liu, Tongliang, Dacheng Tao, and Dong Xu. 2016. arXiv:1601.00238 [Cs, Stat], January.
Liu, T., and D. Tao. 2015. IEEE Transactions on Neural Networks and Learning Systems PP (99): 1–1.
López-Serrano, Patricio, Christian Dittmar, Yigitcan Özer, and Meinard Müller. 2019. “NMF Toolbox: Music Processing Applications of Nonnegative Matrix Factorization.” In.
Mailhé, Boris, Rémi Gribonval, Pierre Vandergheynst, and Frédéric Bimbot. 2011. Signal Processing, Advances in Multirate Filter Bank Structures and Multiscale Representations, 91 (12): 2822–35.
Mairal, Julien, Francis Bach, and Jean Ponce. 2014. Sparse Modeling for Image and Vision Processing. Vol. 8.
Mairal, Julien, Francis Bach, Jean Ponce, and Guillermo Sapiro. 2009. In Proceedings of the 26th Annual International Conference on Machine Learning, 689–96. ICML ’09. New York, NY, USA: ACM.
———. 2010. The Journal of Machine Learning Research 11: 19–60.
Martinsson, Per-Gunnar. 2016. arXiv:1607.01649 [Math], July.
Martinsson, Per-Gunnar, Vladimir Rockhlin, and Mark Tygert. 2006. DTIC Document.
Mensch, Arthur, Julien Mairal, Bertrand Thirion, and Gael Varoquaux. 2017. arXiv:1701.05363 [Math, q-Bio, Stat], January.
Needell, Deanna, and Roman Vershynin. 2009. Foundations of Computational Mathematics 9 (3): 317–34.
Nowak, W., and A. Litvinenko. 2013. Mathematical Geosciences 45 (4): 411–35.
Oymak, Samet, and Joel A. Tropp. 2015. arXiv:1511.09433 [Cs, Math, Stat], November.
Paatero, Pentti, and Unto Tapper. 1994. Environmetrics 5 (2): 111–26.
Pan, Gang, Wangsheng Zhang, Zhaohui Wu, and Shijian Li. 2014. PLoS ONE 9 (7): e102799.
Rokhlin, Vladimir, Arthur Szlam, and Mark Tygert. 2009. SIAM J. Matrix Anal. Appl. 31 (3): 1100–1124.
Rokhlin, Vladimir, and Mark Tygert. 2008. Proceedings of the National Academy of Sciences 105 (36): 13212–17.
Ryabko, Daniil, and Boris Ryabko. 2010. IEEE Transactions on Information Theory 56 (3): 1430–35.
Schmidt, M.N., J. Larsen, and Fu-Tien Hsiao. 2007. In 2007 IEEE Workshop on Machine Learning for Signal Processing, 431–36.
Seshadhri, C., Aneesh Sharma, Andrew Stolman, and Ashish Goel. 2020. Proceedings of the National Academy of Sciences 117 (11): 5631–37.
Singh, Ajit P., and Geoffrey J. Gordon. 2008. In Machine Learning and Knowledge Discovery in Databases, 358–73. Springer.
Smaragdis, Paris. 2004. In Independent Component Analysis and Blind Signal Separation, edited by Carlos G. Puntonet and Alberto Prieto, 494–99. Lecture Notes in Computer Science. Granada, Spain: Springer Berlin Heidelberg.
Soh, Yong Sheng, and Venkat Chandrasekaran. 2017. arXiv:1701.01207 [Cs, Math, Stat], January.
Sorzano, C. O. S., J. Vargas, and A. Pascual Montano. 2014. arXiv:1403.2877 [Cs, q-Bio, Stat], March.
Spielman, Daniel A., and Shang-Hua Teng. 2004. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, 81–90. STOC ’04. New York, NY, USA: ACM.
———. 2006. arXiv:cs/0607105, July.
———. 2008a. arXiv:0808.4134 [Cs], August.
———. 2008b. arXiv:0809.3232 [Cs], September.
Spielman, D., and N. Srivastava. 2011. SIAM Journal on Computing 40 (6): 1913–26.
Sra, Suvrit, and Inderjit S. Dhillon. 2006. In Advances in Neural Information Processing Systems 18, edited by Y. Weiss, B. Schölkopf, and J. C. Platt, 283–90. MIT Press.
Sun, Ying, and Michael L. Stein. 2016. Journal of Computational and Graphical Statistics 25 (1): 187–208.
Tropp, Joel A., Alp Yurtsever, Madeleine Udell, and Volkan Cevher. 2016. arXiv:1609.00048 [Cs, Math, Stat], August.
———. 2017. SIAM Journal on Matrix Analysis and Applications 38 (4): 1454–85.
Tufts, D. W., and R. Kumaresan. 1982. Proceedings of the IEEE 70 (9): 975–89.
Tung, Frederick, and James J. Little. n.d.
Türkmen, Ali Caner. 2015. arXiv:1507.03194 [Cs, Stat], July.
Turner, Richard E., and Maneesh Sahani. 2014. IEEE Transactions on Signal Processing 62 (23): 6171–83.
Udell, M., and A. Townsend. 2019. SIAM Journal on Mathematics of Data Science 1 (1): 144–60.
Vaz, Colin, Asterios Toutios, and Shrikanth S. Narayanan. 2016. In, 963–67.
Vincent, E., N. Bertin, and R. Badeau. 2008. In 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, 109–12.
Virtanen, T. 2007. IEEE Transactions on Audio, Speech, and Language Processing 15 (3): 1066–74.
Vishnoi, Nisheeth K. 2013. “Lx = b.” Foundations and Trends® in Theoretical Computer Science 8 (1-2): 1–141.
Wager, S., L. Chen, M. Kim, and C. Raphael. 2017. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 391–95.
Wang, Boyue, Yongli Hu, Junbin Gao, Yanfeng Sun, Haoran Chen, and Baocai Yin. 2017. In PRoceedings of IJCAI, 2017.
Wang, Shusen, Alex Gittens, and Michael W. Mahoney. 2017. arXiv:1702.04837 [Cs, Stat], February.
Wang, Y. X., and Y. J. Zhang. 2013. IEEE Transactions on Knowledge and Data Engineering 25 (6): 1336–53.
Wang, Yuan, and Yunde Jia. 2004. “Fisher Non-Negative Matrix Factorization for Learning Local Features.” In In Proc. Asian Conf. On Comp. Vision, 27–30.
Wilkinson, William J., Michael Riis Andersen, Joshua D. Reiss, Dan Stowell, and Arno Solin. 2019. arXiv:1901.11436 [Cs, Eess, Stat], January.
Woodruff, David P. 2014. Sketching as a Tool for Numerical Linear Algebra. Foundations and Trends in Theoretical Computer Science 1.0. Now Publishers.
Woolfe, Franco, Edo Liberty, Vladimir Rokhlin, and Mark Tygert. 2008. Applied and Computational Harmonic Analysis 25 (3): 335–66.
Yang, Jiyan, Xiangrui Meng, and Michael W. Mahoney. 2015. arXiv:1502.03032 [Cs, Math, Stat], February.
Yang, Wenzhuo, and Huan Xu. 2015. In Journal of Machine Learning Research, 494–503.
Ye, Ke, and Lek-Heng Lim. 2016. Foundations of Computational Mathematics 16 (3): 577–98.
Yin, M., J. Gao, and Z. Lin. 2016. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (3): 504–17.
Yoshii, Kazuyoshi. 2013. “Beyond NMF: Time-Domain Audio Source Separation Without Phase Reconstruction,” 6.
Yu, Chenhan D., William B. March, and George Biros. 2017. In arXiv:1701.02324 [Cs].
Yu, Hsiang-Fu, Cho-Jui Hsieh, Si Si, and Inderjit S. Dhillon. 2012. In IEEE International Conference of Data Mining, 765–74.
———. 2014. Knowledge and Information Systems 41 (3): 793–819.
Zass, Ron, and Amnon Shashua. 2005. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1 - Volume 01, 294–301. ICCV ’05. Washington, DC, USA: IEEE Computer Society.
Zhang, Kai, Chuanren Liu, Jie Zhang, Hui Xiong, Eric Xing, and Jieping Ye. 2017. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 615–23. KDD ’17. New York, NY, USA: ACM.
Zhang, Xiao, Lingxiao Wang, and Quanquan Gu. 2017. arXiv:1701.00481 [Stat], January.
Zhang, Zhongyuan, Chris Ding, Tao Li, and Xiangsun Zhang. 2007. In Seventh IEEE International Conference on Data Mining, 2007. ICDM 2007, 391–400. IEEE.
Zhou, Tianyi, and Dacheng Tao. 2011. “GoDec: Randomized Low-Rank & Sparse Matrix Decomposition in Noisy Case.” In Proceedings of the 28th International Conference on Machine Learning (ICML-11).
———. 2012. Journal of Machine Learning Research.
Zitnik, Marinka, and Blaz Zupan. 2018. arXiv:1808.01743 [Cs, q-Bio, Stat], August.
