Yet another, lesser-known (?) way of scaling GP regression: train a committee of experts, each on a subset of the observations, and fuse their predictions (Cao and Fleet 2015; Deisenroth and Ng 2015; Rullière et al. 2018; Tresp 2000).
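The Bayesian committee machine (Tresp 2000) is the easiest of these to sketch: each expert is an exact GP on its own subset, and the experts' Gaussian predictions are fused in precision space, with a correction that subtracts the prior precision counted once too often by each extra expert. A minimal NumPy sketch, assuming a 1-D RBF kernel with unit signal variance; `gp_predict`, `bcm`, and the hyperparameter values are illustrative choices of mine, not taken from any of the cited papers:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, signal_var=1.0):
    """RBF kernel between two 1-D input arrays."""
    sq_dist = (a[:, None] - b[None, :]) ** 2
    return signal_var * np.exp(-0.5 * sq_dist / lengthscale**2)

def gp_predict(Xs, ys, Xt, noise=0.1):
    """Exact GP posterior mean and (noise-free) variance at test points Xt."""
    K = rbf(Xs, Xs) + noise**2 * np.eye(len(Xs))
    Ks = rbf(Xs, Xt)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ys))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xt, Xt)) - np.sum(v**2, axis=0)
    return mu, var

def bcm(X, y, Xt, n_experts=4, prior_var=1.0, noise=0.1):
    """Bayesian committee machine: fuse subset-GP predictions in precision space."""
    precision = np.zeros(len(Xt))
    weighted_mean = np.zeros(len(Xt))
    # Deterministic split of the data into disjoint expert subsets.
    for idx in np.array_split(np.arange(len(X)), n_experts):
        mu_k, var_k = gp_predict(X[idx], y[idx], Xt, noise=noise)
        precision += 1.0 / var_k
        weighted_mean += mu_k / var_k
    # Each expert's posterior contains one copy of the prior; keep only one.
    precision -= (n_experts - 1) / prior_var
    var = 1.0 / precision
    return var * weighted_mean, var
```

Each expert only ever factorizes an `(n/K) × (n/K)` kernel matrix, so the cubic cost drops by roughly `K²`; the later references (generalized PoE, robust BCM, nested kriging) mainly differ in how the per-expert precisions are weighted before summing.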
References
Cao, Yanshuai, and David J. Fleet. 2015. “Generalized Product of Experts for Automatic and Principled Fusion of Gaussian Process Predictions.” arXiv.
Deisenroth, Marc, and Jun Wei Ng. 2015. “Distributed Gaussian Processes.” In Proceedings of the 32nd International Conference on Machine Learning, 1481–90. Lille, France: PMLR.
Jimenez, Felix, and Matthias Katzfuss. 2023. “Scalable Bayesian Optimization Using Vecchia Approximations of Gaussian Processes.” In Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, 1492–512. PMLR.
Liu, Haitao, Jianfei Cai, Yi Wang, and Yew-Soon Ong. 2018. “Generalized Robust Bayesian Committee Machine for Large-Scale Gaussian Process Regression.”
Rullière, Didier, Nicolas Durrande, François Bachoc, and Clément Chevalier. 2018. “Nested Kriging Predictions for Datasets with a Large Number of Observations.” Statistics and Computing 28 (4): 849–67.
Tresp, Volker. 2000. “A Bayesian Committee Machine.” Neural Computation 12 (11): 2719–41.