Randomised linear algebra

The twin to random matrices, and elder sibling of vector random projections. Notes on performing linear-algebra operations using randomised matrix projections, a.k.a. sketching. Useful for, e.g., randomised regression.

Quite an old family of methods, but extra hot recently.
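The basic trick underlying all of this: multiplying data by a suitably scaled random matrix approximately preserves geometry (the Johnson–Lindenstrauss flavour of result), so you can do your linear algebra in the compressed space. A minimal NumPy sketch of the idea, with all sizes chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 10_000, 512           # n points in d dimensions, sketched down to k

X = rng.standard_normal((n, d))

# Gaussian sketching matrix, scaled so that E[||S x||^2] = ||x||^2
S = rng.standard_normal((k, d)) / np.sqrt(k)
Y = X @ S.T                           # (n, k) sketched data

# Pairwise distances survive the compression up to small relative error
orig = np.linalg.norm(X[0] - X[1])
sketched = np.linalg.norm(Y[0] - Y[1])
rel_err = abs(sketched - orig) / orig  # typically a few percent at this k
```

Dense Gaussian sketches are the easy-to-analyse baseline; in practice one reaches for sparse or structured sketches (Achlioptas 2003; Li, Hastie, and Church 2006) to make the multiplication itself cheap.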

Obligatory Igor Carron mention: Random matrices are too damn large.

IBM had a research group in this area, although they seem to have gone silent since a productive year in 2014.

Martinsson (2016) is a recent review of the action.
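The workhorse algorithm in that literature is the randomized SVD of Halko, Martinsson, and Tropp (2009): sketch the range of the matrix with a random test matrix, orthonormalise, then do a small exact SVD. A bare-bones sketch, assuming nothing beyond NumPy (parameter names are mine, not from any particular library):

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_iter=2, seed=None):
    """Two-stage randomized SVD in the style of Halko, Martinsson & Tropp (2009)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = rank + n_oversample
    # Stage A: capture the approximate range of A with a random sketch
    Omega = rng.standard_normal((n, k))
    Y = A @ Omega
    for _ in range(n_iter):           # power iterations sharpen a flat spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)            # orthonormal basis, (m, k)
    # Stage B: exact SVD of the small projected matrix B = Q^T A
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# Sanity check on an exactly rank-5 matrix: recovery to machine precision
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 300))
U, s, Vt = randomized_svd(A, rank=5, seed=2)
rel_err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

For anything production-grade, `sklearn.utils.extmath.randomized_svd` implements the same scheme with more care about conditioning.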

🏗 make coherent.


Random regression

See randomised regression
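The simplest instance is sketch-and-solve least squares (cf. Rokhlin and Tygert 2008): compress the rows of a tall, skinny problem with a random sketch, then solve the small problem exactly. A toy illustration, with a plain Gaussian sketch rather than the fast transforms used in practice:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 20_000, 50                     # tall, skinny least-squares problem
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Sketch-and-solve: compress n rows down to k, then solve the k x d problem
k = 1_000                             # sketch size, a modest multiple of d
S = rng.standard_normal((k, n)) / np.sqrt(k)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# Compare against the exact solution
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
```

A dense Gaussian sketch costs more to apply than solving the problem directly, which is why the fast versions use subsampled randomized Fourier/Hadamard transforms or sparse sketches; this is purely to show the shape of the argument.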


Achlioptas, Dimitris. 2003. “Database-Friendly Random Projections: Johnson-Lindenstrauss with Binary Coins.” Journal of Computer and System Sciences, Special Issue on PODS 2001, 66 (4): 671–87. https://doi.org/10.1016/S0022-0000(03)00025-4.
Achlioptas, Dimitris, and Frank Mcsherry. 2007. “Fast Computation of Low-Rank Matrix Approximations.” J. ACM 54 (2). https://doi.org/10.1145/1219092.1219097.
Alaoui, Ahmed El, and Michael W. Mahoney. 2014. “Fast Randomized Kernel Methods With Statistical Guarantees.” arXiv:1411.0306 [cs, Stat], November. http://arxiv.org/abs/1411.0306.
Bingham, Ella, and Heikki Mannila. 2001. “Random Projection in Dimensionality Reduction: Applications to Image and Text Data.” In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 245–50. KDD ’01. New York, NY, USA: ACM. https://doi.org/10.1145/502512.502546.
Choromanski, Krzysztof, Mark Rowland, and Adrian Weller. 2017. “The Unreasonable Effectiveness of Random Orthogonal Embeddings.” arXiv:1703.00864 [stat], March. http://arxiv.org/abs/1703.00864.
Dasgupta, Sanjoy. 2000. “Experiments with Random Projection.” In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, 143–51. UAI’00. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. http://arxiv.org/abs/1301.3849.
Dasgupta, Sanjoy, and Anupam Gupta. 2003. “An Elementary Proof of a Theorem of Johnson and Lindenstrauss.” Random Structures & Algorithms 22 (1): 60–65. https://doi.org/10.1002/rsa.10073.
Dezfouli, Amir, and Edwin V. Bonilla. 2015. “Scalable Inference for Gaussian Process Models with Black-Box Likelihoods.” In Advances in Neural Information Processing Systems 28, 1414–22. NIPS’15. Cambridge, MA, USA: MIT Press. http://dl.acm.org/citation.cfm?id=2969239.2969397.
Drineas, Petros, and Michael W. Mahoney. 2005. “On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning.” Journal of Machine Learning Research 6 (December): 2153–75. http://jmlr.org/papers/volume6/drineas05a/drineas05a.pdf.
Erichson, N. Benjamin, Sergey Voronin, Steven L. Brunton, and J. Nathan Kutz. 2016. “Randomized Matrix Decompositions Using R.” arXiv:1608.02148 [cs, Stat], August. http://arxiv.org/abs/1608.02148.
Gottwald, Georg A., and Sebastian Reich. 2020. “Supervised Learning from Noisy Observations: Combining Machine-Learning Techniques with Data Assimilation.” arXiv:2007.07383 [physics, Stat], July. http://arxiv.org/abs/2007.07383.
Gower, Robert M. 2016. “Sketch and Project: Randomized Iterative Methods for Linear Systems and Inverting Matrices.” arXiv:1612.06013 [math], December. http://arxiv.org/abs/1612.06013.
Gower, Robert M., and Peter Richtárik. 2016. “Randomized Quasi-Newton Updates Are Linearly Convergent Matrix Inversion Algorithms.” arXiv:1602.01768 [cs, Math], February. http://arxiv.org/abs/1602.01768.
Halko, Nathan, Per-Gunnar Martinsson, and Joel A. Tropp. 2009. “Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions.” arXiv:0909.4061 [math], September. http://arxiv.org/abs/0909.4061.
Hesslow, Daniel, Alessandro Cappelli, Igor Carron, Laurent Daudet, Raphaël Lafargue, Kilian Müller, Ruben Ohana, Gustave Pariente, and Iacopo Poli. 2021. “Photonic Co-Processors in HPC: Using LightOn OPUs for Randomized Numerical Linear Algebra.” arXiv:2104.14429 [cs, Stat], May. http://arxiv.org/abs/2104.14429.
Hutchinson, M.F. 1990. “A Stochastic Estimator of the Trace of the Influence Matrix for Laplacian Smoothing Splines.” Communications in Statistics - Simulation and Computation 19 (2): 433–50. https://doi.org/10.1080/03610919008812866.
Kapralov, Michael, Vamsi K. Potluru, and David P. Woodruff. 2016. “How to Fake Multiply by a Gaussian Matrix.” arXiv:1606.05732 [cs], June. http://arxiv.org/abs/1606.05732.
Krummenacher, Gabriel, Brian McWilliams, Yannic Kilcher, Joachim M Buhmann, and Nicolai Meinshausen. 2016. “Scalable Adaptive Stochastic Optimization Using Random Projections.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 1750–58. Curran Associates, Inc. http://papers.nips.cc/paper/6054-scalable-adaptive-stochastic-optimization-using-random-projections.pdf.
Kumar, N. Kishore, and Jan Schneider. 2016. “Literature Survey on Low Rank Approximation of Matrices.” arXiv:1606.06511 [cs, Math], June. http://arxiv.org/abs/1606.06511.
Lahiri, Subhaneil, Peiran Gao, and Surya Ganguli. 2016. “Random Projections of Random Manifolds.” arXiv:1607.04331 [cs, q-Bio, Stat], July. http://arxiv.org/abs/1607.04331.
Launay, Julien, Iacopo Poli, François Boniface, and Florent Krzakala. 2020. “Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures.” In Advances in Neural Information Processing Systems, 33:15. http://arxiv.org/abs/2006.12878.
Lee, Daniel D., and H. Sebastian Seung. 2001. “Algorithms for Non-Negative Matrix Factorization.” In Advances in Neural Information Processing Systems 13, edited by T. K. Leen, T. G. Dietterich, and V. Tresp, 556–62. MIT Press. http://papers.nips.cc/paper/1861-algorithms-for-non-negative-matrix-factorization.pdf.
Li, Ping, Trevor J. Hastie, and Kenneth W. Church. 2006. “Very Sparse Random Projections.” In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 287–96. KDD ’06. New York, NY, USA: ACM. https://doi.org/10.1145/1150402.1150436.
Liberty, Edo, Franco Woolfe, Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert. 2007. “Randomized Algorithms for the Low-Rank Approximation of Matrices.” Proceedings of the National Academy of Sciences 104 (51): 20167–72. https://doi.org/10.1073/pnas.0709640104.
Mahoney, Michael W. 2010. Randomized Algorithms for Matrices and Data. Vol. 3. https://doi.org/10.1561/2200000035.
Martinsson, Per-Gunnar. 2016. “Randomized Methods for Matrix Computations and Analysis of High Dimensional Data.” arXiv:1607.01649 [math], July. http://arxiv.org/abs/1607.01649.
Martinsson, Per-Gunnar, Vladimir Rokhlin, and Mark Tygert. 2006. “A Randomized Algorithm for the Approximation of Matrices.” DTIC Document. http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA458932.
Martinsson, Per-Gunnar, and Sergey Voronin. 2015. “A Randomized Blocked Algorithm for Efficiently Computing Rank-Revealing Factorizations of Matrices.” arXiv:1503.07157 [math], March. http://arxiv.org/abs/1503.07157.
Pourkamali-Anaraki, Farhad, and Stephen Becker. 2016. “A Randomized Approach to Efficient Kernel Clustering.” arXiv:1608.07597 [stat], August. http://arxiv.org/abs/1608.07597.
Rahimi, Ali, and Benjamin Recht. 2007. “Random Features for Large-Scale Kernel Machines.” In Advances in Neural Information Processing Systems, 1177–84. Curran Associates, Inc. http://papers.nips.cc/paper/3182-random-features-for-large-scale-kernel-machines.
Refinetti, Maria, Stéphane d’Ascoli, Ruben Ohana, and Sebastian Goldt. 2021. “Align, Then Memorise: The Dynamics of Learning with Feedback Alignment.” arXiv:2011.12428 [cond-Mat, Stat], June. http://arxiv.org/abs/2011.12428.
Rokhlin, Vladimir, Arthur Szlam, and Mark Tygert. 2009. “A Randomized Algorithm for Principal Component Analysis.” SIAM J. Matrix Anal. Appl. 31 (3): 1100–1124. https://doi.org/10.1137/080736417.
Rokhlin, Vladimir, and Mark Tygert. 2008. “A Fast Randomized Algorithm for Overdetermined Linear Least-Squares Regression.” Proceedings of the National Academy of Sciences 105 (36): 13212–17. https://doi.org/10.1073/pnas.0804869105.
Saibaba, Arvind K., Alen Alexanderian, and Ilse C. F. Ipsen. 2016. “Randomized Matrix-Free Trace and Log-Determinant Estimators.” arXiv:1605.04893 [math], May. http://arxiv.org/abs/1605.04893.
Sterge, Nicholas, and Bharath Sriperumbudur. 2021. “Statistical Optimality and Computational Efficiency of Nyström Kernel PCA.” arXiv:2105.08875 [cs, Math, Stat], May. http://arxiv.org/abs/2105.08875.
Tropp, Joel A. 2015. An Introduction to Matrix Concentration Inequalities. http://arxiv.org/abs/1501.01571.
Tropp, Joel A., Alp Yurtsever, Madeleine Udell, and Volkan Cevher. 2016. “Randomized Single-View Algorithms for Low-Rank Matrix Approximation.” arXiv:1609.00048 [cs, Math, Stat], August. http://arxiv.org/abs/1609.00048.
Voronin, Sergey, and Per-Gunnar Martinsson. 2014. “Efficient Algorithms for CUR and Interpolative Matrix Decompositions.” arXiv:1412.8447 [cs, Math], December. http://arxiv.org/abs/1412.8447.
Wang, Shusen. 2015. “A Practical Guide to Randomized Matrix Computations with MATLAB Implementations.” arXiv:1505.07570 [cs], May. http://arxiv.org/abs/1505.07570.
Woolfe, Franco, Edo Liberty, Vladimir Rokhlin, and Mark Tygert. 2008. “A Fast Randomized Algorithm for the Approximation of Matrices.” Applied and Computational Harmonic Analysis 25 (3): 335–66. https://doi.org/10.1016/j.acha.2007.12.002.
Yang, Jiyan, Xiangrui Meng, and Michael W. Mahoney. 2015. “Implementing Randomized Matrix Algorithms in Parallel and Distributed Environments.” arXiv:1502.03032 [cs, Math, Stat], February. http://arxiv.org/abs/1502.03032.
Yang, Jiyan, Vikas Sindhwani, Haim Avron, and Michael Mahoney. 2014. “Quasi-Monte Carlo Feature Maps for Shift-Invariant Kernels.” arXiv:1412.8293 [cs, Math, Stat], December. http://arxiv.org/abs/1412.8293.
Yang, Tianbao, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, and Zhi-Hua Zhou. 2012. “Nyström Method Vs Random Fourier Features: A Theoretical and Empirical Comparison.” In Advances in Neural Information Processing Systems, 476–84. http://papers.nips.cc/paper/4588-nystrom-method-vs-random-fourier-features-a-theoretical-and-empirical-comparison.
Yu, Hsiang-Fu, Cho-Jui Hsieh, Si Si, and Inderjit S. Dhillon. 2012. “Scalable Coordinate Descent Approaches to Parallel Matrix Factorization for Recommender Systems.” In IEEE International Conference of Data Mining, 765–74. https://doi.org/10.1109/ICDM.2012.168.
Zhang, Kai, Chuanren Liu, Jie Zhang, Hui Xiong, Eric Xing, and Jieping Ye. 2017. “Randomization or Condensation?: Linear-Cost Matrix Sketching Via Cascaded Compression Sampling.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 615–23. KDD ’17. New York, NY, USA: ACM. https://doi.org/10.1145/3097983.3098050.
Zhao, Liang. 2017. “Fast Algorithms on Random Matrices and Structured Matrices.” http://academicworks.cuny.edu/gc_etds/2073/.
Zhou, Tianyi, and Dacheng Tao. 2011. “GoDec: Randomized Low-Rank & Sparse Matrix Decomposition in Noisy Case.” http://www.icml-2011.org/papers/41_icmlpaper.pdf.
