Gaussian processes are stochastic processes/fields with jointly Gaussian distributions of observations. In machine learning these models are often used for regression or classification. They provide a nonparametric method of inferring regression functions, with a conveniently Bayesian interpretation and reasonably elegant learning and inference steps. I would further add that this is the crystal meth of machine learning methods, in terms of the addictiveness and the passion of the people who use it.

The central trick is a clever union of
Hilbert spaces and probability,
which gives a probabilistic interpretation of
functional regression as a kind of
nonparametric Bayesian posterior inference via
representer theorems,
in which one obtains posterior distributions over functions.
Regression using Gaussian processes is common in,
e.g., spatial statistics,
where it arises as *kriging*.

As [gaussianprocess.org](http://www.gaussianprocess.org/) puts it:

> This web site aims to provide an overview of resources concerned with probabilistic modeling, inference and learning based on Gaussian processes. Although Gaussian processes have a long history in the field of statistics, they seem to have been employed extensively only in niche areas. With the advent of kernel machines in the machine learning community, models based on Gaussian processes have become commonplace for problems of regression (kriging) and classification as well as a host of more specialized applications.

I’ve not been enthusiastic about these in the past. It’s nice to have a principled nonparametric Bayesian formalism, but it has always seemed pointless having a formalism that is so computationally demanding that people don’t try to use more than a thousand data points, or spend most of a paper working out how to approximate this simple elegant model with a complex messy model.

However, perhaps I should be persuaded by tricks such as AutoGP (Krauth et al. 2016), which breaks some computational deadlocks by clever use of inducing variables and variational approximation to produce a compressed representation of the data with tractable inference and model selection, including kernel selection, and does the whole thing in many dimensions simultaneously. There are other clever tricks like this one; e.g. Saatçi (2012) shows how to use a lattice structure for observations to make computation cheap.

## Quick intro

I am not the right guy to provide the canonical introduction, because it already exists. It is (Rasmussen and Williams 2006). But here is a quick simple special case sufficient to start from.

We work with a centred (i.e. mean-zero) process, in which case for every finite set \(\mathbf{f}:=\{f(t_k);k=1,\dots,T\}\) of realisations of that process, the joint distribution is centred Gaussian,

\[\begin{aligned}
f(t)
&\sim \operatorname{GP}\left(0, \kappa(t, t';\mathbf{\theta})\right)
\\
p(\mathbf{f}) &=(2\pi )^{-{\frac {T}{2}}}\det(\mathrm{K})^{-{\frac {1}{2}}}\,e^{-{\frac{1}{2}}\mathbf{f}^{\top}\mathrm{K}^{-1}\mathbf{f}}\\
&=\mathcal{N}(\mathbf{f};\mathbf{0},\mathrm{K}),
\end{aligned}\]
where \(\mathrm{K}\) is the covariance (Gram) matrix with entries \(\mathrm{K}_{jk}=\kappa(t_j,t_k).\)
We are specifying *only* the second moments, and this gives us
all the remaining properties of the process.
That is, the unobserved, continuous random function \(f\) generates realisations
\(\mathbf{f}\in\mathbb{R}^T\)
at discrete times \(\mathbf{t}=t_1,t_2,\dots,t_T.\)
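
To make this concrete, here is a minimal sketch in NumPy of drawing realisations from that prior; the squared-exponential kernel and its hyperparameters are arbitrary choices of mine for illustration:

```python
import numpy as np

def kappa(t, u, scale=1.0, ell=0.5):
    """Squared-exponential covariance kernel (an arbitrary example choice)."""
    return scale**2 * np.exp(-0.5 * ((t[:, None] - u[None, :]) / ell) ** 2)

T = 100
t = np.linspace(0.0, 5.0, T)
K = kappa(t, t)                                # K_{jk} = kappa(t_j, t_k)
L = np.linalg.cholesky(K + 1e-9 * np.eye(T))   # jitter for numerical stability
f = L @ np.random.randn(T)                     # one realisation of f at times t
```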

Now,

\[\begin{aligned} f(t) &\sim \operatorname{GP}\left(0, \kappa(t, t';\mathbf{\theta})\right) & \text{Prior} \\ \mathbf{y}|\mathbf{f} &\sim \prod_{k=1}^{T} p\left(y_{k} | f\left(t_{k}\right)\right) & \text{Likelihood} \end{aligned}\]

To begin with, the observation times will form a lattice \(\mathbf{t}=1,2,\dots,T.\)

We allow the observations to be distinct from the realisations, in that the realisations may be observed with some noise. The observation noise is also Gaussian, in the sense that

\[ y_k=f(t_k)+\epsilon_k,\]

where

\[ \epsilon_k \sim \mathcal{N}\left(0, \sigma_{y}^{2}\right). \]

We refer to the set of observations as \(\mathbf{y}\in\mathbb{R}^T\). The data includes observations and coordinates, and is written \(\mathcal{D}:=\{(t_k, y_k)\}_{k=1,2,\dots,T}\).

The main insight is that the Gaussian prior is conjugate to the Gaussian likelihood, which means that the posterior distribution is also Gaussian (although it will no longer be centred).

We can find the posterior over the latent function values given the observations by considering the joint distribution

\[ \begin{pmatrix}\mathbf{y} \\ \mathbf{f}\end{pmatrix} \sim \mathcal{N}\left(\mathbf{0},\begin{pmatrix}\mathbf{K}_{y} & \mathbf{K} \\ \mathbf{K}^{\top} & \mathbf{K}_{\mathbf{f}}\end{pmatrix}\right). \]
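
Under the noise model above, \(\mathbf{K}_y=\mathbf{K}_{\mathbf{f}}+\sigma_y^2 I\) when the latent values are taken at the observation times; more generally \(\mathbf{K}\) is the cross-covariance between the observations and whichever latent values we wish to predict. Standard Gaussian conditioning then gives the posterior in closed form:

\[ \mathbf{f}\mid\mathbf{y} \sim \mathcal{N}\left(\mathbf{K}^{\top}\mathbf{K}_{y}^{-1}\mathbf{y},\; \mathbf{K}_{\mathbf{f}}-\mathbf{K}^{\top}\mathbf{K}_{y}^{-1}\mathbf{K}\right). \]

In code, following the Cholesky recipe of Rasmussen and Williams (2006, Algorithm 2.1) — a minimal sketch reusing the toy `kappa` from above, not anyone’s official API:

```python
import numpy as np

def gp_posterior(t_train, y, t_test, kappa, sigma_y=0.1):
    """Posterior mean and covariance of f(t_test) given noisy observations y."""
    K_y = kappa(t_train, t_train) + sigma_y**2 * np.eye(len(t_train))
    K_s = kappa(t_train, t_test)                  # cross-covariance
    K_ss = kappa(t_test, t_test)
    L = np.linalg.cholesky(K_y)                   # avoid forming K_y^{-1}
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    cov = K_ss - v.T @ v
    return mean, cov
```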

## Density estimation

Can I infer a density using these? Yes. One popular method is apparently the logistic Gaussian process (Tokdar 2007; Lenk 2003).
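
The construction: push a Gaussian process through a normalised exponential, \(\pi(x)=e^{f(x)}/\int e^{f(s)}\,ds\), so that it becomes a random density; the normalising integral is the computationally awkward part that Tokdar (2007) attacks. A toy sample from such a prior on a grid (the kernel and grid are arbitrary choices of mine):

```python
import numpy as np

# Draw a random density on [0, 1] from a logistic Gaussian process prior.
x = np.linspace(0.0, 1.0, 200)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.1) ** 2)  # SE kernel
f = np.linalg.cholesky(K + 1e-9 * np.eye(len(x))) @ np.random.randn(len(x))
pi = np.exp(f - f.max())          # subtract max for numerical stability
pi /= pi.sum() * (x[1] - x[0])    # normalise on the grid
```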

## Kernels

a.k.a. covariance models.

GP models are the meeting of covariance estimation and kernel machines. The covariance kernels are what make this go.
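
One reason kernels are such a flexible modelling language: sums and products of covariance kernels are again covariance kernels, so structures can be composed (Duvenaud et al. 2013). A small sketch of that algebra (the particular kernels and parameters are arbitrary):

```python
import numpy as np

def se(t, u, ell=1.0):
    """Squared-exponential kernel."""
    return np.exp(-0.5 * ((t[:, None] - u[None, :]) / ell) ** 2)

def periodic(t, u, period=1.0, ell=1.0):
    """The canonical periodic kernel."""
    d = np.pi * np.abs(t[:, None] - u[None, :]) / period
    return np.exp(-2.0 * np.sin(d) ** 2 / ell**2)

def quasi_periodic(t, u):
    # A product of kernels is a kernel: periodic structure with slow drift.
    return se(t, u, ell=3.0) * periodic(t, u, period=1.0, ell=0.7)
```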

## Using state filtering

When one dimension of the input vector can be interpreted as a time dimension, Gaussian process regression can be recast as Kalman filtering and smoothing in a state-space model, which has benefits in terms of speed: inference costs scale linearly, rather than cubically, in the number of time steps (Hartikainen and Särkkä 2010; Särkkä, Solin, and Hartikainen 2013).
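
For a one-dimensional taste of the trick: a GP with the exponential (Ornstein-Uhlenbeck, Matérn-1/2) kernel \(\kappa(t,t')=\sigma^2 e^{-|t-t'|/\ell}\) is Markov, so exact GP regression with it reduces to a scalar Kalman filter costing \(\mathcal{O}(T)\) rather than \(\mathcal{O}(T^3)\). A minimal sketch, assuming evenly spaced observation times:

```python
import numpy as np

def ou_kalman_filter(y, dt=1.0, sigma2=1.0, ell=1.0, noise=0.1):
    """Filtering means/variances for a GP with the exponential (OU) kernel.

    Equivalent state-space model: x_k = a x_{k-1} + q_k with a = exp(-dt/ell),
    q_k ~ N(0, sigma2 * (1 - a**2)), observed as y_k = x_k + N(0, noise**2).
    """
    a = np.exp(-dt / ell)
    q = sigma2 * (1.0 - a**2)
    m, P = 0.0, sigma2                      # stationary prior on the state
    means, variances = [], []
    for yk in y:
        m, P = a * m, a**2 * P + q          # predict one step ahead
        gain = P / (P + noise**2)           # Kalman gain
        m = m + gain * (yk - m)             # update with observation y_k
        P = (1.0 - gain) * P
        means.append(m)
        variances.append(P)
    return np.array(means), np.array(variances)
```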

## On lattice observations

🏗

## By variational inference

🏗

## With inducing variables

“Sparse GP”. See Quiñonero-Candela and Rasmussen (2005). 🏗
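
The crudest member of that family, Subset of Regressors (SoR), already shows the payoff: with \(M\) inducing points the dominant cost drops from \(\mathcal{O}(N^3)\) to \(\mathcal{O}(NM^2)\). A sketch of the SoR predictive equations from Quiñonero-Candela and Rasmussen (2005), with naive random selection of inducing inputs (real implementations choose or optimise them more carefully):

```python
import numpy as np

def sor_predict(t, y, t_star, kappa, n_inducing=20, sigma_y=0.1, seed=None):
    """Subset-of-Regressors GP predictive mean/variance with M inducing points."""
    rng = np.random.default_rng(seed)
    u = rng.choice(t, size=n_inducing, replace=False)   # naive inducing inputs
    K_uu = kappa(u, u) + 1e-8 * np.eye(n_inducing)
    K_uf = kappa(u, t)
    K_us = kappa(u, t_star)
    Sigma = np.linalg.inv(K_uf @ K_uf.T / sigma_y**2 + K_uu)
    mean = K_us.T @ Sigma @ K_uf @ y / sigma_y**2
    var = np.sum(K_us * (Sigma @ K_us), axis=0)         # diag(K_su Σ K_us)
    return mean, var
```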

## Approximation with dropout

Famously, Gal and Ghahramani (2015) show that training a certain class of networks stochastically using dropout approximates Gaussian processes. Papers like Kasim et al. (2020) level that up, building massive networks that try to do cheap approximation using dropout. They claim to get remarkably good results by basically doing the simple and obvious things.
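
At prediction time the trick is simply to leave dropout switched on and average stochastic forward passes. A generic PyTorch sketch (my own toy architecture, not the authors’ code):

```python
import torch

# A small MLP whose dropout layers stay active at test time.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.ReLU(), torch.nn.Dropout(p=0.1),
    torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Dropout(p=0.1),
    torch.nn.Linear(64, 1),
)

def mc_predict(net, x, n_samples=100):
    """Treat the scatter of dropout forward passes as predictive uncertainty."""
    net.train()                   # .train() keeps dropout stochastic
    with torch.no_grad():
        samples = torch.stack([net(x) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)
```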

## As dimension reduction

e.g. GP-LVM (Lawrence 2005). 🏗

## Readings

This lecture by the late David MacKay is probably good; the man could talk.

There is also a well-illustrated and elementary introduction by Yuge Shi.

## Implementations

Bayes workhorse Stan can do Gaussian process regression just like almost everything else; see Michael Betancourt’s blog, 1. 2. 3.

The current scikit-learn has basic Gaussian processes, and an introduction:

> Gaussian Processes (GP) are a generic supervised learning method designed to solve regression and probabilistic classification problems.
>
> The advantages of Gaussian processes are:
>
> - The prediction interpolates the observations (at least for regular kernels).
> - The prediction is probabilistic (Gaussian) so that one can compute empirical confidence intervals and decide based on those if one should refit (online fitting, adaptive fitting) the prediction in some region of interest.
> - Versatile: different kernels can be specified. Common kernels are provided, but it is also possible to specify custom kernels.
>
> The disadvantages of Gaussian processes include:
>
> - They are not sparse, i.e., they use the whole samples/features information to perform the prediction.
> - They lose efficiency in high dimensional spaces – namely when the number of features exceeds a few dozens.

Those disadvantages are dubious.
The first is *strictly* correct, but not useful, in that sparse approximate GPs are a whole industry.
The second is just wrong, unless I have misunderstood something.
Cost scaling should be linear in the dimension of the feature space, as with all other kernel methods, so
the scaling cost due to the dimensionality of the features is swamped by the
scaling cost in the number of data points, AFAICT.
Inference is \(\mathcal{O}(DN^3)\) for \(N\) observations and \(D\) features.
The dimensionality cost is thus no worse than linear regression for prediction, and superior for training, although other models with linear complexity in the sample dimension escape without such a warning.
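
For what it’s worth, the scikit-learn API is terse enough; a minimal regression example (the kernel choice is an arbitrary illustration):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.linspace(0.0, 5.0, 50)[:, None]
y = np.sin(t).ravel() + 0.1 * np.random.randn(50)

# Kernel hyperparameters are fitted by maximising the marginal likelihood.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel())
gpr.fit(t, y)
mean, std = gpr.predict(np.linspace(0.0, 5.0, 200)[:, None], return_std=True)
```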

There are fancier Gaussian process tool sets than this one, with less worrisome caveats. Chris Fonnesbeck mentions GPflow (using TensorFlow), AutoGP (also using TensorFlow), PyMC3, and the scikit-learn implementation. I think that GPy is a common default choice. There are several GP models in the [pytorch](pytorch.html)-based pyro. Stheno seems to be popular for Julia and also comes in an alternative flavour, python stheno. There is a rather similar-looking GaussianProcesses.jl, although that last one seems to conflate model training and inference in an inconvenient way, so I have not used it. Plus I notice skgmm is a fancified version of the scikit-learn one. George is another Python GP regression package that claims to handle big data at the cost of lots of C++. GPStuff is the one for MATLAB/Octave that I have seen around the place. So… it’s easy enough to bikeshed, is the message I’m getting here.

The GPflow docs include the following clarification of the genealogy of these toolkits:

> GPflow has origins in GPy by the GPy contributors, and much of the interface is intentionally similar for continuity (though some parts of the interface may diverge in future). GPflow has a rather different remit from GPy though:
>
> - GPflow leverages TensorFlow for faster/bigger computation
> - GPflow has much less code than GPy, mostly because all gradient computation is handled by TensorFlow.
> - GPflow focusses on variational inference and MCMC – there is no expectation propagation or Laplace approximation.
> - GPflow does not have any plotting functionality.

## References

Abrahamsen, Petter. 1997. “A Review of Gaussian Random Fields and Correlation Functions.” http://publications.nr.no/publications.nr.no/directdownload/publications.nr.no/rask/old/917_Rapport.pdf.

Abt, Markus, and William J. Welch. 1998. “Fisher Information and Maximum-Likelihood Estimation of Covariance Parameters in Gaussian Stochastic Processes.” *Canadian Journal of Statistics* 26 (1): 127–37. https://doi.org/10.2307/3315678.

Altun, Yasemin, Alex J. Smola, and Thomas Hofmann. 2004. “Exponential Families for Conditional Random Fields.” In *Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence*, 2–9. UAI ’04. Arlington, Virginia, United States: AUAI Press. http://arxiv.org/abs/1207.4131.

Alvarado, Pablo A., and Dan Stowell. 2018. “Efficient Learning of Harmonic Priors for Pitch Detection in Polyphonic Music,” November. http://arxiv.org/abs/1705.07104.

Birgé, Lucien, and Pascal Massart. 2006. “Minimal Penalties for Gaussian Model Selection.” *Probability Theory and Related Fields* 138 (1-2): 33–73. https://doi.org/10.1007/s00440-006-0011-8.

Bonilla, Edwin V., Kian Ming A. Chai, and Christopher K. I. Williams. 2007. “Multi-Task Gaussian Process Prediction.” In *Proceedings of the 20th International Conference on Neural Information Processing Systems*, 153–60. NIPS’07. USA: Curran Associates Inc. http://dl.acm.org/citation.cfm?id=2981562.2981582.

Bonilla, Edwin V., Karl Krauth, and Amir Dezfouli. 2016. “Generic Inference in Latent Gaussian Process Models,” September. http://arxiv.org/abs/1609.00577.

Borovitskiy, Viacheslav, Alexander Terenin, Peter Mostowsky, and Marc Peter Deisenroth. 2020. “Matern Gaussian Processes on Riemannian Manifolds,” June. http://arxiv.org/abs/2006.10160.

Chen, Tian Qi, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. 2018. “Neural Ordinary Differential Equations.” In *Advances in Neural Information Processing Systems 31*, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 6572–83. Curran Associates, Inc. http://papers.nips.cc/paper/7892-neural-ordinary-differential-equations.pdf.

Csató, Lehel, Manfred Opper, and Ole Winther. 2001. “TAP Gibbs Free Energy, Belief Propagation and Sparsity.” In *Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic*, 657–63. NIPS’01. Cambridge, MA, USA: MIT Press. http://papers.nips.cc/paper/2027-tap-gibbs-free-energy-belief-propagation-and-sparsity.pdf.

Cunningham, John P., Krishna V. Shenoy, and Maneesh Sahani. 2008. “Fast Gaussian Process Methods for Point Process Intensity Estimation.” In *Proceedings of the 25th International Conference on Machine Learning*, 192–99. ICML ’08. New York, NY, USA: ACM Press. https://doi.org/10.1145/1390156.1390181.

Cutajar, Kurt, Edwin V. Bonilla, Pietro Michiardi, and Maurizio Filippone. 2017. “Random Feature Expansions for Deep Gaussian Processes.” In *PMLR*. http://proceedings.mlr.press/v70/cutajar17a.html.

Dahl, Astrid, and Edwin V. Bonilla. 2019. “Sparse Grouped Gaussian Processes for Solar Power Forecasting,” March. http://arxiv.org/abs/1903.03986.

Damianou, Andreas, and Neil Lawrence. 2013. “Deep Gaussian Processes.” In *Artificial Intelligence and Statistics*, 207–15. http://proceedings.mlr.press/v31/damianou13a.html.

Damianou, Andreas, Michalis K. Titsias, and Neil D. Lawrence. 2011. “Variational Gaussian Process Dynamical Systems.” In *Advances in Neural Information Processing Systems 24*, edited by J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, 2510–8. Curran Associates, Inc. http://papers.nips.cc/paper/4330-variational-gaussian-process-dynamical-systems.pdf.

Dezfouli, Amir, and Edwin V. Bonilla. 2015. “Scalable Inference for Gaussian Process Models with Black-Box Likelihoods.” In *Advances in Neural Information Processing Systems 28*, 1414–22. NIPS’15. Cambridge, MA, USA: MIT Press. http://dl.acm.org/citation.cfm?id=2969239.2969397.

Dunlop, Matthew M., Mark A. Girolami, Andrew M. Stuart, and Aretha L. Teckentrup. 2018. “How Deep Are Deep Gaussian Processes?” *Journal of Machine Learning Research* 19 (1): 2100–2145. http://jmlr.org/papers/v19/18-015.html.

Duvenaud, David. 2014. “Automatic Model Construction with Gaussian Processes.” PhD Thesis, University of Cambridge. https://github.com/duvenaud/phd-thesis.

Duvenaud, David, James Lloyd, Roger Grosse, Joshua Tenenbaum, and Ghahramani Zoubin. 2013. “Structure Discovery in Nonparametric Regression Through Compositional Kernel Search.” In *Proceedings of the 30th International Conference on Machine Learning (ICML-13)*, 1166–74. http://machinelearning.wustl.edu/mlpapers/papers/icml2013_duvenaud13.

Ebden, Mark. 2015. “Gaussian Processes: A Quick Introduction,” May. http://arxiv.org/abs/1505.02965.

Eleftheriadis, Stefanos, Tom Nicholson, Marc Deisenroth, and James Hensman. 2017. “Identification of Gaussian Process State Space Models.” In *Advances in Neural Information Processing Systems 30*, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 5309–19. Curran Associates, Inc. http://papers.nips.cc/paper/7115-identification-of-gaussian-process-state-space-models.pdf.

Emery, Xavier. 2007. “Conditioning Simulations of Gaussian Random Fields by Ordinary Kriging.” *Mathematical Geology* 39 (6): 607–23. https://doi.org/10.1007/s11004-007-9112-x.

Evgeniou, Theodoros, Charles A. Micchelli, and Massimiliano Pontil. 2005. “Learning Multiple Tasks with Kernel Methods.” *Journal of Machine Learning Research* 6 (Apr): 615–37. http://www.jmlr.org/papers/v6/evgeniou05a.html.

Ferguson, Thomas S. 1973. “A Bayesian Analysis of Some Nonparametric Problems.” *The Annals of Statistics* 1 (2): 209–30. https://doi.org/10.1214/aos/1176342360.

Föll, Roman, Bernard Haasdonk, Markus Hanselmann, and Holger Ulmer. 2017. “Deep Recurrent Gaussian Process with Variational Sparse Spectrum Approximation,” November. http://arxiv.org/abs/1711.00799.

Frigola, Roger, Yutian Chen, and Carl Edward Rasmussen. 2014. “Variational Gaussian Process State-Space Models.” In *Advances in Neural Information Processing Systems 27*, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 3680–8. Curran Associates, Inc. http://papers.nips.cc/paper/5375-variational-gaussian-process-state-space-models.pdf.

Frigola, Roger, Fredrik Lindsten, Thomas B Schön, and Carl Edward Rasmussen. 2013. “Bayesian Inference and Learning in Gaussian Process State-Space Models with Particle MCMC.” In *Advances in Neural Information Processing Systems 26*, edited by C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, 3156–64. Curran Associates, Inc. http://papers.nips.cc/paper/5085-bayesian-inference-and-learning-in-gaussian-process-state-space-models-with-particle-mcmc.pdf.

Gal, Yarin, and Zoubin Ghahramani. 2015. “Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning.” In *Proceedings of the 33rd International Conference on Machine Learning (ICML-16)*. http://arxiv.org/abs/1506.02142.

Gal, Yarin, and Mark van der Wilk. 2014. “Variational Inference in Sparse Gaussian Process Regression and Latent Variable Models - a Gentle Tutorial,” February. http://arxiv.org/abs/1402.1412.

Garnelo, Marta, Dan Rosenbaum, Chris J. Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo J. Rezende, and S. M. Ali Eslami. 2018. “Conditional Neural Processes,” July, 10. https://arxiv.org/abs/1807.01613v1.

Garnelo, Marta, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, and Yee Whye Teh. 2018. “Neural Processes,” July. https://arxiv.org/abs/1807.01622v1.

Ghahramani, Zoubin. 2013. “Bayesian Non-Parametrics and the Probabilistic Approach to Modelling.” *Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences* 371 (1984): 20110553. https://doi.org/10.1098/rsta.2011.0553.

Gilboa, E., Y. Saatçi, and J. P. Cunningham. 2015. “Scaling Multidimensional Inference for Structured Gaussian Processes.” *IEEE Transactions on Pattern Analysis and Machine Intelligence* 37 (2): 424–36. https://doi.org/10.1109/TPAMI.2013.192.

Girolami, Mark, and Simon Rogers. 2005. “Hierarchic Bayesian Models for Kernel Learning.” In *Proceedings of the 22nd International Conference on Machine Learning - ICML ’05*, 241–48. Bonn, Germany: ACM Press. https://doi.org/10.1145/1102351.1102382.

Gratiet, Loïc Le, Stefano Marelli, and Bruno Sudret. 2016. “Metamodel-Based Sensitivity Analysis: Polynomial Chaos Expansions and Gaussian Processes.” In *Handbook of Uncertainty Quantification*, edited by Roger Ghanem, David Higdon, and Houman Owhadi, 1–37. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-11259-6_38-1.

Grosse, Roger, Ruslan R. Salakhutdinov, William T. Freeman, and Joshua B. Tenenbaum. 2012. “Exploiting Compositionality to Explore a Large Space of Model Structures.” In *Proceedings of the Conference on Uncertainty in Artificial Intelligence*. http://arxiv.org/abs/1210.4856.

Hartikainen, J., and S. Särkkä. 2010. “Kalman Filtering and Smoothing Solutions to Temporal Gaussian Process Regression Models.” In *2010 IEEE International Workshop on Machine Learning for Signal Processing*, 379–84. Kittila, Finland: IEEE. https://doi.org/10.1109/MLSP.2010.5589113.

Hensman, James, Nicolo Fusi, and Neil D. Lawrence. 2013. “Gaussian Processes for Big Data.” In *Uncertainty in Artificial Intelligence*, 282. Citeseer.

Huber, Marco F. 2014. “Recursive Gaussian Process: On-Line Regression and Learning.” *Pattern Recognition Letters* 45 (August): 85–91. https://doi.org/10.1016/j.patrec.2014.03.004.

Huggins, Jonathan H., Trevor Campbell, Mikołaj Kasprzak, and Tamara Broderick. 2018. “Scalable Gaussian Process Inference with Finite-Data Mean and Variance Guarantees,” June. http://arxiv.org/abs/1806.10234.

Jordan, Michael Irwin. 1999. *Learning in Graphical Models*. Cambridge, Mass.: MIT Press.

Karvonen, Toni, and Simo Särkkä. 2016. “Approximate State-Space Gaussian Processes via Spectral Transformation.” In *2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP)*, 1–6. Vietri sul Mare, Salerno, Italy: IEEE. https://doi.org/10.1109/MLSP.2016.7738812.

Kasim, M. F., D. Watson-Parris, L. Deaconu, S. Oliver, P. Hatfield, D. H. Froula, G. Gregori, et al. 2020. “Up to Two Billion Times Acceleration of Scientific Simulations with Deep Neural Architecture Search,” January. http://arxiv.org/abs/2001.08055.

Kingma, Diederik P., and Max Welling. 2014. “Auto-Encoding Variational Bayes.” In *ICLR 2014 Conference*. http://arxiv.org/abs/1312.6114.

Ko, Jonathan, and Dieter Fox. 2009. “GP-BayesFilters: Bayesian Filtering Using Gaussian Process Prediction and Observation Models.” *Autonomous Robots* 27 (1): 75–90. https://doi.org/10.1007/s10514-009-9119-x.

Kocijan, Juš, Agathe Girard, Blaž Banko, and Roderick Murray-Smith. 2005. “Dynamic Systems Identification with Gaussian Processes.” *Mathematical and Computer Modelling of Dynamical Systems* 11 (4): 411–24. https://doi.org/10.1080/13873950500068567.

Krauth, Karl, Edwin V. Bonilla, Kurt Cutajar, and Maurizio Filippone. 2016. “AutoGP: Exploring the Capabilities and Limitations of Gaussian Process Models.” In *UAI17*. http://arxiv.org/abs/1610.05392.

Kroese, Dirk P., and Zdravko I. Botev. 2013. “Spatial Process Generation,” August. http://arxiv.org/abs/1308.0399.

Lawrence, Neil. 2005. “Probabilistic Non-Linear Principal Component Analysis with Gaussian Process Latent Variable Models.” *Journal of Machine Learning Research* 6 (Nov): 1783–1816. http://www.jmlr.org/papers/v6/lawrence05a.html.

Lawrence, Neil D., and Raquel Urtasun. 2009. “Non-Linear Matrix Factorization with Gaussian Processes.” In *Proceedings of the 26th Annual International Conference on Machine Learning*, 601–8. ICML ’09. New York, NY, USA: ACM. https://doi.org/10.1145/1553374.1553452.

Lawrence, Neil, Matthias Seeger, and Ralf Herbrich. 2003. “Fast Sparse Gaussian Process Methods: The Informative Vector Machine.” In *Proceedings of the 16th Annual Conference on Neural Information Processing Systems*, 609–16. http://papers.nips.cc/paper/2240-fast-sparse-gaussian-process-methods-the-informative-vector-machine.

Lázaro-Gredilla, Miguel, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, and Aníbal R. Figueiras-Vidal. 2010. “Sparse Spectrum Gaussian Process Regression.” *Journal of Machine Learning Research* 11 (Jun): 1865–81. http://www.jmlr.org/papers/v11/lazaro-gredilla10a.

Lenk, Peter J. 2003. “Bayesian Semiparametric Density Estimation and Model Verification Using a Logistic–Gaussian Process.” *Journal of Computational and Graphical Statistics* 12 (3): 548–65. https://doi.org/10.1198/1061860032021.

Lindgren, Finn, Håvard Rue, and Johan Lindström. 2011. “An Explicit Link Between Gaussian Fields and Gaussian Markov Random Fields: The Stochastic Partial Differential Equation Approach.” *Journal of the Royal Statistical Society: Series B (Statistical Methodology)* 73 (4): 423–98. https://doi.org/10.1111/j.1467-9868.2011.00777.x.

Liutkus, Antoine, Roland Badeau, and Gäel Richard. 2011. “Gaussian Processes for Underdetermined Source Separation.” *IEEE Transactions on Signal Processing* 59 (7): 3155–67. https://doi.org/10.1109/TSP.2011.2119315.

Lloyd, James Robert, David Duvenaud, Roger Grosse, Joshua Tenenbaum, and Zoubin Ghahramani. 2014. “Automatic Construction and Natural-Language Description of Nonparametric Regression Models.” In *Twenty-Eighth AAAI Conference on Artificial Intelligence*. http://arxiv.org/abs/1402.4304.

Louizos, Christos, Xiahan Shi, Klamer Schutte, and Max Welling. 2019. “The Functional Neural Process,” June. http://arxiv.org/abs/1906.08324.

MacKay, David J C. 1998. “Introduction to Gaussian Processes.” *NATO ASI Series. Series F: Computer and System Sciences*, 133–65. http://www.inference.phy.cam.ac.uk/mackay/gpB.pdf.

———. 2002. “Gaussian Processes.” In *Information Theory, Inference & Learning Algorithms*, Chapter 45. Cambridge University Press. http://www.inference.phy.cam.ac.uk/mackay/itprnn/ps/534.548.pdf.

Matthews, Alexander G. de G., Mark van der Wilk, Tom Nickson, Keisuke Fujii, Alexis Boukouvalas, Pablo León-Villagrá, Zoubin Ghahramani, and James Hensman. 2016. “GPflow: A Gaussian Process Library Using TensorFlow,” October. http://arxiv.org/abs/1610.08733.

Mattos, César Lincoln C., Zhenwen Dai, Andreas Damianou, Guilherme A. Barreto, and Neil D. Lawrence. 2017. “Deep Recurrent Gaussian Processes for Outlier-Robust System Identification.” *Journal of Process Control*, DYCOPS-CAB 2016, 60 (December): 82–94. https://doi.org/10.1016/j.jprocont.2017.06.010.

Mattos, César Lincoln C., Zhenwen Dai, Andreas Damianou, Jeremy Forth, Guilherme A. Barreto, and Neil D. Lawrence. 2016. “Recurrent Gaussian Processes.” In *Proceedings of ICLR*. http://arxiv.org/abs/1511.06644.

Micchelli, Charles A., and Massimiliano Pontil. 2005a. “Learning the Kernel Function via Regularization.” *Journal of Machine Learning Research* 6 (Jul): 1099–1125. http://www.jmlr.org/papers/v6/micchelli05a.html.

———. 2005b. “On Learning Vector-Valued Functions.” *Neural Computation* 17 (1): 177–204. https://doi.org/10.1162/0899766052530802.

Moreno-Muñoz, Pablo, Antonio Artés-Rodríguez, and Mauricio A. Álvarez. 2019. “Continual Multi-Task Gaussian Processes,” October. http://arxiv.org/abs/1911.00002.

Nagarajan, Sai Ganesh, Gareth Peters, and Ido Nevat. 2018. “Spatial Field Reconstruction of Non-Gaussian Random Fields: The Tukey G-and-H Random Process.” *SSRN Electronic Journal*. https://doi.org/10.2139/ssrn.3159687.

Nickisch, Hannes, Arno Solin, and Alexander Grigorevskiy. 2018. “State Space Gaussian Processes with Non-Gaussian Likelihood.” In *International Conference on Machine Learning*, 3789–98. http://proceedings.mlr.press/v80/nickisch18a.html.

Papaspiliopoulos, Omiros, Yvo Pokern, Gareth O. Roberts, and Andrew M. Stuart. 2012. “Nonparametric Estimation of Diffusions: A Differential Equations Approach.” *Biometrika* 99 (3): 511–31. https://doi.org/10.1093/biomet/ass034.

Quiñonero-Candela, Joaquin, and Carl Edward Rasmussen. 2005. “A Unifying View of Sparse Approximate Gaussian Process Regression.” *Journal of Machine Learning Research* 6 (Dec): 1939–59. http://jmlr.org/papers/volume6/quinonero-candela05a/quinonero-candela05a.pdf.

Raissi, Maziar, and George Em Karniadakis. 2017. “Machine Learning of Linear Differential Equations Using Gaussian Processes,” January. http://arxiv.org/abs/1701.02440.

Rasmussen, Carl Edward, and Christopher K. I. Williams. 2006. *Gaussian Processes for Machine Learning*. Adaptive Computation and Machine Learning. Cambridge, Mass: MIT Press. http://www.gaussianprocess.org/gpml/.

Reece, S., and S. Roberts. 2010. “An Introduction to Gaussian Processes for the Kalman Filter Expert.” In *2010 13th International Conference on Information Fusion*, 1–9. https://doi.org/10.1109/ICIF.2010.5711863.

Rossi, Simone, Markus Heinonen, Edwin V. Bonilla, Zheyang Shen, and Maurizio Filippone. 2020. “Rethinking Sparse Gaussian Processes: Bayesian Approaches to Inducing-Variable Approximations,” March. https://arxiv.org/abs/2003.03080v2.

Saatçi, Yunus. 2012. “Scalable Inference for Structured Gaussian Process Models.” Ph.D., University of Cambridge. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610016.

Saemundsson, Steindor, Alexander Terenin, Katja Hofmann, and Marc Peter Deisenroth. 2020. “Variational Integrator Networks for Physically Structured Embeddings,” March. http://arxiv.org/abs/1910.09349.

Salimbeni, Hugh, and Marc Deisenroth. 2017. “Doubly Stochastic Variational Inference for Deep Gaussian Processes.” In *Advances in Neural Information Processing Systems*. http://arxiv.org/abs/1705.08933.

Salimbeni, Hugh, Stefanos Eleftheriadis, and James Hensman. 2018. “Natural Gradients in Practice: Non-Conjugate Variational Inference in Gaussian Process Models.” In *International Conference on Artificial Intelligence and Statistics*, 689–97. http://proceedings.mlr.press/v84/salimbeni18a.html.

Särkkä, Simo. 2013. *Bayesian Filtering and Smoothing*. Institute of Mathematical Statistics Textbooks 3. Cambridge, U.K. ; New York: Cambridge University Press. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.461.4042&rep=rep1&type=pdf.

Särkkä, Simo, and Jouni Hartikainen. 2012. “Infinite-Dimensional Kalman Filtering Approach to Spatio-Temporal Gaussian Process Regression.” In *Artificial Intelligence and Statistics*. http://www.jmlr.org/proceedings/papers/v22/sarkka12.html.

Särkkä, Simo, A. Solin, and J. Hartikainen. 2013. “Spatiotemporal Learning via Infinite-Dimensional Bayesian Filtering and Smoothing: A Look at Gaussian Process Regression Through Kalman Filtering.” *IEEE Signal Processing Magazine* 30 (4): 51–61. https://doi.org/10.1109/MSP.2013.2246292.

Smith, Michael Thomas, Mauricio A. Alvarez, and Neil D. Lawrence. 2018. “Gaussian Process Regression for Binned Data,” September. http://arxiv.org/abs/1809.02010.

Snelson, Edward, and Zoubin Ghahramani. 2005. “Sparse Gaussian Processes Using Pseudo-Inputs.” In *Advances in Neural Information Processing Systems*, 1257–64. http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2005_543.pdf.

Tang, Wenpin, Lu Zhang, and Sudipto Banerjee. 2019. “On Identifiability and Consistency of the Nugget in Gaussian Spatial Process Models,” August. http://arxiv.org/abs/1908.05726.

Titsias, Michalis K. 2009. “Variational Learning of Inducing Variables in Sparse Gaussian Processes.” In *International Conference on Artificial Intelligence and Statistics*, 567–74. http://www.jmlr.org/proceedings/papers/v5/titsias09a/titsias09a.pdf.

Titsias, Michalis, and Neil D. Lawrence. 2010. “Bayesian Gaussian Process Latent Variable Model.” In *Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics*, 844–51. http://proceedings.mlr.press/v9/titsias10a.html.

Tokdar, Surya T. 2007. “Towards a Faster Implementation of Density Estimation with Logistic Gaussian Process Priors.” *Journal of Computational and Graphical Statistics* 16 (3): 633–55. https://doi.org/10.1198/106186007X210206.

Turner, Richard E., and Maneesh Sahani. 2014. “Time-Frequency Analysis as Probabilistic Inference.” *IEEE Transactions on Signal Processing* 62 (23): 6171–83. https://doi.org/10.1109/TSP.2014.2362100.

Turner, Ryan, Marc Deisenroth, and Carl Rasmussen. 2010. “State-Space Inference and Learning with Gaussian Processes.” In *Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics*, 868–75. http://proceedings.mlr.press/v9/turner10a.html.

Vanhatalo, Jarno, Jaakko Riihimäki, Jouni Hartikainen, Pasi Jylänki, Ville Tolvanen, and Aki Vehtari. 2013. “GPstuff: Bayesian Modeling with Gaussian Processes.” *Journal of Machine Learning Research* 14 (April): 1175–79. http://jmlr.csail.mit.edu/papers/v14/vanhatalo13a.html.

———. 2015. “Bayesian Modeling with Gaussian Processes Using the GPstuff Toolbox,” July. http://arxiv.org/abs/1206.5754.

Walder, Christian, Kwang In Kim, and Bernhard Schölkopf. 2008. “Sparse Multiscale Gaussian Process Regression.” In *Proceedings of the 25th International Conference on Machine Learning*, 1112–9. ICML ’08. New York, NY, USA: ACM. https://doi.org/10.1145/1390156.1390296.

Walder, C., B. Schölkopf, and O. Chapelle. 2006. “Implicit Surface Modelling with a Globally Regularised Basis of Compact Support.” *Computer Graphics Forum* 25 (3): 635–44. https://doi.org/10.1111/j.1467-8659.2006.00983.x.

Wilkinson, William J., Michael Riis Andersen, Joshua D. Reiss, Dan Stowell, and Arno Solin. 2019. “End-to-End Probabilistic Inference for Nonstationary Audio Analysis,” January. https://arxiv.org/abs/1901.11436v1.

Wilk, Mark van der, Andrew G. Wilson, and Carl E. Rasmussen. 2014. “Variational Inference for Latent Variable Modelling of Correlation Structure.” In *NIPS 2014 Workshop on Advances in Variational Inference*.

Williams, Christopher KI, and Matthias Seeger. 2001. “Using the Nyström Method to Speed up Kernel Machines.” In *Advances in Neural Information Processing Systems*, 682–88. http://papers.nips.cc/paper/1866-using-the-nystrom-method-to-speed-up-kernel-machines.

Williams, Christopher, Stefan Klanke, Sethu Vijayakumar, and Kian M. Chai. 2009. “Multi-Task Gaussian Process Learning of Robot Inverse Dynamics.” In *Advances in Neural Information Processing Systems 21*, edited by D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, 265–72. Curran Associates, Inc. http://papers.nips.cc/paper/3385-multi-task-gaussian-process-learning-of-robot-inverse-dynamics.pdf.

Wilson, Andrew Gordon, and Ryan Prescott Adams. 2013. “Gaussian Process Kernels for Pattern Discovery and Extrapolation.” In *International Conference on Machine Learning*. http://arxiv.org/abs/1302.4245.

Wilson, Andrew Gordon, Christoph Dann, Christopher G. Lucas, and Eric P. Xing. 2015. “The Human Kernel,” October. http://arxiv.org/abs/1510.07389.

Wilson, Andrew Gordon, and Zoubin Ghahramani. 2011. “Generalised Wishart Processes.” In *Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence*, 736–44. UAI’11. Arlington, Virginia, United States: AUAI Press. http://dl.acm.org/citation.cfm?id=3020548.3020633.

———. 2012. “Modelling Input Varying Correlations Between Multiple Responses.” In *Machine Learning and Knowledge Discovery in Databases*, edited by Peter A. Flach, Tijl De Bie, and Nello Cristianini, 858–61. Lecture Notes in Computer Science. Springer Berlin Heidelberg.

Wilson, Andrew Gordon, David A. Knowles, and Zoubin Ghahramani. 2011. “Gaussian Process Regression Networks,” October. http://arxiv.org/abs/1110.4411.

Wilson, Andrew Gordon, and Hannes Nickisch. 2015. “Kernel Interpolation for Scalable Structured Gaussian Processes (KISS-GP).” In *Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37*, 1775–84. ICML’15. Lille, France: JMLR.org. http://proceedings.mlr.press/v37/wilson15.html.

Wilson, James T., Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, and Marc Peter Deisenroth. 2020. “Efficiently Sampling Functions from Gaussian Process Posteriors,” July. http://arxiv.org/abs/2002.09309.

Zhang, Rui, Christian J. Walder, Edwin V. Bonilla, Marian-Andrei Rizoiu, and Lexing Xie. 2019. “Quantile Propagation for Wasserstein-Approximate Gaussian Processes,” December. https://arxiv.org/abs/1912.10200v2.