Randomised linear algebra



The twin of random matrices, and the elder sibling of vector random projections: notes on performing linear-algebra operations using randomised matrix projections. Useful for, e.g., randomised regression.

Quite an old family of methods, but especially active of late.

Obligatory Igor Carron mention: Random matrices are too damn large.

IBM had a research group working on this, although it seems to have gone quiet since a productive 2014.

Martinsson (2016) seems to be a fresh review of the state of the art.

πŸ— make coherent.

Implementations
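A minimal sketch of the workhorse of this family, a randomized range finder plus small exact SVD, in the spirit of Halko, Martinsson, and Tropp (2009). The function name and defaults here are my own illustration, not any particular library's API:

```python
import numpy as np

def randomized_svd(A, k, oversample=10, n_iter=2, rng=None):
    """Approximate rank-k SVD via a randomized range finder
    (a sketch in the style of Halko, Martinsson & Tropp 2009)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Probe the range of A with a Gaussian test matrix, with a few
    # extra columns of oversampling for safety.
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega
    # A couple of power iterations sharpen the captured subspace
    # when the singular spectrum decays slowly.
    for _ in range(n_iter):
        Y = A @ (A.T @ Y)
    # Orthonormal basis for the (approximate) range of A.
    Q, _ = np.linalg.qr(Y)
    # Project onto that basis and run a cheap exact SVD of the
    # small (k + oversample) x n matrix.
    B = Q.T @ A
    Uhat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uhat)[:, :k], s[:k], Vt[:k, :]
```

The expensive steps are two passes of matrix multiplication against thin random matrices, which is why this scales to matrices where a full SVD is hopeless.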

Random regression

See randomised regression
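The flavour of the regression variant can be conveyed by sketch-and-solve least squares (cf. Rokhlin and Tygert 2008): compress a tall design matrix with a random sketch, then solve the small compressed problem exactly. A hedged sketch, with a plain Gaussian sketching matrix rather than the fast subsampled transforms used in practice:

```python
import numpy as np

def sketched_lstsq(X, y, sketch_size, rng=None):
    """Sketch-and-solve least squares: compress the n x d problem
    to sketch_size x d with a Gaussian sketch, then solve exactly."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Gaussian sketching matrix, scaled so E[S.T @ S] = I.
    S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
    # Solve the small sketched problem; for sketch_size >> d this
    # approximates the full least-squares solution.
    beta, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None)
    return beta
```

With sketch_size a modest multiple of d, the residual of the sketched solution is within a (1 + ε) factor of the optimum with high probability, at a fraction of the cost of the full normal equations.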

References

Achlioptas, Dimitris. 2003. "Database-Friendly Random Projections: Johnson-Lindenstrauss with Binary Coins." Journal of Computer and System Sciences, Special Issue on PODS 2001, 66 (4): 671–87.
Achlioptas, Dimitris, and Frank McSherry. 2007. "Fast Computation of Low-Rank Matrix Approximations." J. ACM 54 (2).
Alaoui, Ahmed El, and Michael W. Mahoney. 2014. "Fast Randomized Kernel Methods With Statistical Guarantees." arXiv:1411.0306 [Cs, Stat], November.
Bingham, Ella, and Heikki Mannila. 2001. "Random Projection in Dimensionality Reduction: Applications to Image and Text Data." In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 245–50. KDD '01. New York, NY, USA: ACM.
Choromanski, Krzysztof, Mark Rowland, and Adrian Weller. 2017. "The Unreasonable Effectiveness of Random Orthogonal Embeddings." arXiv:1703.00864 [Stat], March.
Dasgupta, Sanjoy. 2000. "Experiments with Random Projection." In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, 143–51. UAI '00. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Dasgupta, Sanjoy, and Anupam Gupta. 2003. "An Elementary Proof of a Theorem of Johnson and Lindenstrauss." Random Structures & Algorithms 22 (1): 60–65.
Dezfouli, Amir, and Edwin V. Bonilla. 2015. "Scalable Inference for Gaussian Process Models with Black-Box Likelihoods." In Advances in Neural Information Processing Systems 28, 1414–22. NIPS '15. Cambridge, MA, USA: MIT Press.
Drineas, Petros, and Michael W. Mahoney. 2005. "On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning." Journal of Machine Learning Research 6 (December): 2153–75.
Erichson, N. Benjamin, Sergey Voronin, Steven L. Brunton, and J. Nathan Kutz. 2016. "Randomized Matrix Decompositions Using R." arXiv:1608.02148 [Cs, Stat], August.
Gottwald, Georg A., and Sebastian Reich. 2020. "Supervised Learning from Noisy Observations: Combining Machine-Learning Techniques with Data Assimilation." arXiv:2007.07383 [Physics, Stat], July.
Gower, Robert M. 2016. "Sketch and Project: Randomized Iterative Methods for Linear Systems and Inverting Matrices." arXiv:1612.06013 [Math], December.
Gower, Robert M., and Peter Richtárik. 2016. "Randomized Quasi-Newton Updates Are Linearly Convergent Matrix Inversion Algorithms." arXiv:1602.01768 [Cs, Math], February.
Halko, Nathan, Per-Gunnar Martinsson, and Joel A. Tropp. 2009. "Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions." arXiv:0909.4061 [Math], September.
Hesslow, Daniel, Alessandro Cappelli, Igor Carron, Laurent Daudet, Raphaël Lafargue, Kilian Müller, Ruben Ohana, Gustave Pariente, and Iacopo Poli. 2021. "Photonic Co-Processors in HPC: Using LightOn OPUs for Randomized Numerical Linear Algebra." arXiv:2104.14429 [Cs, Stat], May.
Hutchinson, M.F. 1990. "A Stochastic Estimator of the Trace of the Influence Matrix for Laplacian Smoothing Splines." Communications in Statistics - Simulation and Computation 19 (2): 433–50.
Kapralov, Michael, Vamsi K. Potluru, and David P. Woodruff. 2016. "How to Fake Multiply by a Gaussian Matrix." arXiv:1606.05732 [Cs], June.
Krummenacher, Gabriel, Brian McWilliams, Yannic Kilcher, Joachim M Buhmann, and Nicolai Meinshausen. 2016. "Scalable Adaptive Stochastic Optimization Using Random Projections." In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 1750–58. Curran Associates, Inc.
Kumar, N. Kishore, and Jan Schneider. 2016. "Literature Survey on Low Rank Approximation of Matrices." arXiv:1606.06511 [Cs, Math], June.
Lahiri, Subhaneil, Peiran Gao, and Surya Ganguli. 2016. "Random Projections of Random Manifolds." arXiv:1607.04331 [Cs, q-Bio, Stat], July.
Launay, Julien, Iacopo Poli, François Boniface, and Florent Krzakala. 2020. "Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures." In Advances in Neural Information Processing Systems, 33:15.
Lee, Daniel D., and H. Sebastian Seung. 2001. "Algorithms for Non-Negative Matrix Factorization." In Advances in Neural Information Processing Systems 13, edited by T. K. Leen, T. G. Dietterich, and V. Tresp, 556–62. MIT Press.
Li, Ping, Trevor J. Hastie, and Kenneth W. Church. 2006. "Very Sparse Random Projections." In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 287–96. KDD '06. New York, NY, USA: ACM.
Liberty, Edo, Franco Woolfe, Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert. 2007. "Randomized Algorithms for the Low-Rank Approximation of Matrices." Proceedings of the National Academy of Sciences 104 (51): 20167–72.
Mahoney, Michael W. 2010. Randomized Algorithms for Matrices and Data. Vol. 3.
Martinsson, Per-Gunnar. 2016. "Randomized Methods for Matrix Computations and Analysis of High Dimensional Data." arXiv:1607.01649 [Math], July.
Martinsson, Per-Gunnar, Vladimir Rokhlin, and Mark Tygert. 2006. "A Randomized Algorithm for the Approximation of Matrices." DTIC Document.
Martinsson, Per-Gunnar, and Sergey Voronin. 2015. "A Randomized Blocked Algorithm for Efficiently Computing Rank-Revealing Factorizations of Matrices." arXiv:1503.07157 [Math], March.
Pourkamali-Anaraki, Farhad, and Stephen Becker. 2016. "A Randomized Approach to Efficient Kernel Clustering." arXiv:1608.07597 [Stat], August.
Rahimi, Ali, and Benjamin Recht. 2007. "Random Features for Large-Scale Kernel Machines." In Advances in Neural Information Processing Systems, 1177–84. Curran Associates, Inc.
Refinetti, Maria, Stéphane d'Ascoli, Ruben Ohana, and Sebastian Goldt. 2021. "Align, Then Memorise: The Dynamics of Learning with Feedback Alignment." arXiv:2011.12428 [Cond-Mat, Stat], June.
Rokhlin, Vladimir, Arthur Szlam, and Mark Tygert. 2009. "A Randomized Algorithm for Principal Component Analysis." SIAM J. Matrix Anal. Appl. 31 (3): 1100–1124.
Rokhlin, Vladimir, and Mark Tygert. 2008. "A Fast Randomized Algorithm for Overdetermined Linear Least-Squares Regression." Proceedings of the National Academy of Sciences 105 (36): 13212–17.
Saibaba, Arvind K., Alen Alexanderian, and Ilse C. F. Ipsen. 2016. "Randomized Matrix-Free Trace and Log-Determinant Estimators." arXiv:1605.04893 [Math], May.
Sterge, Nicholas, and Bharath Sriperumbudur. 2021. "Statistical Optimality and Computational Efficiency of Nyström Kernel PCA." arXiv:2105.08875 [Cs, Math, Stat], May.
Tropp, Joel A. 2015. An Introduction to Matrix Concentration Inequalities.
Tropp, Joel A., Alp Yurtsever, Madeleine Udell, and Volkan Cevher. 2016. "Randomized Single-View Algorithms for Low-Rank Matrix Approximation." arXiv:1609.00048 [Cs, Math, Stat], August.
Voronin, Sergey, and Per-Gunnar Martinsson. 2014. "Efficient Algorithms for CUR and Interpolative Matrix Decompositions." arXiv:1412.8447 [Cs, Math], December.
Wang, Shusen. 2015. "A Practical Guide to Randomized Matrix Computations with MATLAB Implementations." arXiv:1505.07570 [Cs], May.
Woolfe, Franco, Edo Liberty, Vladimir Rokhlin, and Mark Tygert. 2008. "A Fast Randomized Algorithm for the Approximation of Matrices." Applied and Computational Harmonic Analysis 25 (3): 335–66.
Yang, Jiyan, Xiangrui Meng, and Michael W. Mahoney. 2015. "Implementing Randomized Matrix Algorithms in Parallel and Distributed Environments." arXiv:1502.03032 [Cs, Math, Stat], February.
Yang, Jiyan, Vikas Sindhwani, Haim Avron, and Michael Mahoney. 2014. "Quasi-Monte Carlo Feature Maps for Shift-Invariant Kernels." arXiv:1412.8293 [Cs, Math, Stat], December.
Yang, Tianbao, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, and Zhi-Hua Zhou. 2012. "Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison." In Advances in Neural Information Processing Systems, 476–84.
Yu, Hsiang-Fu, Cho-Jui Hsieh, Si Si, and Inderjit S. Dhillon. 2012. "Scalable Coordinate Descent Approaches to Parallel Matrix Factorization for Recommender Systems." In IEEE International Conference of Data Mining, 765–74.
Zhang, Kai, Chuanren Liu, Jie Zhang, Hui Xiong, Eric Xing, and Jieping Ye. 2017. "Randomization or Condensation?: Linear-Cost Matrix Sketching Via Cascaded Compression Sampling." In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 615–23. KDD '17. New York, NY, USA: ACM.
Zhao, Liang. 2017. "Fast Algorithms on Random Matrices and Structured Matrices."
Zhou, Tianyi, and Dacheng Tao. 2011. "GoDec: Randomized Low-Rank & Sparse Matrix Decomposition in Noisy Case."
