# Gaussian Process regression via state filtering

## Imposing time on things 🏗️🏗️🏗️ Under heavy construction 🏗️🏗️🏗️

Two classic flavours together: Gaussian processes and state filters/stochastic differential equations; that is, random fields represented as stochastic differential equations.

I am interested here in the trick that makes certain Gaussian process regression problems soluble by making them local, i.e. Markov, with respect to some assumed hidden state, in the same way that Kalman filtering localises Wiener filtering. This means you get to solve a GP regression as an SDE using a state filter.

Not covered here is another concept which includes the same keywords but is distinct: using Gaussian processes to define state-process dynamics or observation distributions.

The GP-filtering trick is explained in an intro article in , based on various precedents , possibly also . Aside: is an incredible paper that invented several research areas at once (GP regression, surrogate models for experiment design, as well as this) and AFAICT no one noticed at the time. Also, Whittle did some foundational work here, but I cannot find the original paper to read it.

The idea is that if your GP covariance kernel is (or can be well approximated by) a rational function then it is possible to factorise it into a tractable state space model, using a duality between random fields and stochastic differential equations. That sounds simple enough conceptually; I wonder about the practice. Of course, when you want some complications, such as non-stationary kernels or hierarchical models, this state space inference trick gets more complicated, and posterior distributions are no longer so simple. But possibly it can still go. (This is a research interest of mine.)
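To make the duality concrete, here is a minimal NumPy/SciPy sketch (hyperparameter values are illustrative) for the Matérn-3/2 kernel: its rational spectral density factorises into a two-dimensional linear SDE in companion form, and the stationary covariance of the SDE output recovers the kernel exactly.

```python
import numpy as np
from scipy.linalg import expm

# Matérn-3/2 kernel k(tau) = s2 * (1 + lam*tau) * exp(-lam*tau), lam = sqrt(3)/ell,
# realised as the output of a 2-dimensional linear SDE in companion form.
# Hyperparameter values below are illustrative.
s2, ell = 1.3, 0.7
lam = np.sqrt(3.0) / ell

F = np.array([[0.0, 1.0],
              [-lam**2, -2.0 * lam]])   # SDE drift matrix
H = np.array([[1.0, 0.0]])              # observe the first state component
Pinf = np.diag([s2, s2 * lam**2])       # stationary state covariance

def kernel_from_sde(tau):
    """Stationary output covariance of the SDE at lag tau >= 0."""
    return (H @ expm(F * tau) @ Pinf @ H.T).item()

def matern32(tau):
    return s2 * (1.0 + lam * tau) * np.exp(-lam * tau)

taus = np.linspace(0.0, 3.0, 7)
sde_vals = np.array([kernel_from_sde(ti) for ti in taus])
exact_vals = matern32(taus)
print(np.allclose(sde_vals, exact_vals))
```

As I understand it, the same companion-form construction works for any half-integer Matérn kernel, and sums and products of such kernels correspond to stacked or Kronecker-structured state spaces.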

introduces a computational toolkit and many worked examples of inference algorithms. looks like it might be solving a similar problem, but I do not yet understand their framing.

This complements, perhaps, the trick of fast Gaussian process calculations on lattices.

tries to introduce a vocabulary for inference based on this insight, by discussing it in terms of computational primitives:

> In time-series data, with $D = 1$, the data sets tend to become long (or unbounded) when observations accumulate over time. For these time-series models, leveraging sequential state space methods from signal processing makes it possible to solve GP inference problems in linear time complexity $O(n)$ if the underlying GP has Markovian structure . This reformulation is exact for Markovian covariance functions (see, e.g., ) such as the exponential, half-integer Matérn, noise, constant, linear, polynomial, Wiener, etc. (and their sums and products). …

> While existing literature has focused on the connection between GP regression and state space methods, the computational primitives allowing for inference using general likelihoods in combination with the Laplace approximation (LA), variational Bayes (VB), and assumed density filtering (ADF, a.k.a. single-sweep expectation propagation, EP) schemes have been largely overlooked. … We present a unifying framework for solving computational primitives for non-Gaussian inference schemes in the state space setting, thus directly enabling inference to be done through LA, VB, KL, and ADF/EP.

> The following computational primitives allow to cast the covariance approximation in more generic terms:
>
> 1. Linear system with "regularized" covariance: $\operatorname{solve}_{\mathbf{K}}(\mathbf{W}, \mathbf{r}):=\left(\mathbf{K}+\mathbf{W}^{-1}\right)^{-1} \mathbf{r}$.
> 2. Matrix-vector multiplications: $\operatorname{mvm}_{\mathbf{K}}(\mathbf{r}):=\mathbf{K}\mathbf{r}$. For learning we also need $\frac{\partial \operatorname{mvm}_{\mathbf{K}}(\mathbf{r})}{\partial \boldsymbol{\theta}}$.
> 3. Log-determinants: $\operatorname{ld}_{\mathbf{K}}(\mathbf{W}):=\log |\mathbf{B}|$ with symmetric and well-conditioned $\mathbf{B}=\mathbf{I}+\mathbf{W}^{\frac{1}{2}} \mathbf{K} \mathbf{W}^{\frac{1}{2}}$. For learning, we need derivatives: $\frac{\partial \operatorname{ld}_{\mathbf{K}}(\mathbf{W})}{\partial \boldsymbol{\theta}}$, $\frac{\partial \operatorname{ld}_{\mathbf{K}}(\mathbf{W})}{\partial \mathbf{W}}$.
> 4. Predictions need latent mean $\mathbb{E}\left[f_{*}\right]$ and variance $\mathbb{V}\left[f_{*}\right]$.
>
> Using these primitives, GP regression can be compactly written as $\mathbf{W}=\mathbf{I} / \sigma_{n}^{2}$, $\boldsymbol{\alpha}=\operatorname{solve}_{\mathbf{K}}(\mathbf{W}, \mathbf{y}-\mathbf{m})$, and
> $$\log Z_{\mathrm{GPR}}=-\frac{1}{2}\left[\boldsymbol{\alpha}^{\top} \operatorname{mvm}_{\mathbf{K}}(\boldsymbol{\alpha})+\operatorname{ld}_{\mathbf{K}}(\mathbf{W})+n \log \left(2 \pi \sigma_{n}^{2}\right)\right].$$
> Approximate inference (LA, VB, KL, ADF/EP), in the case of non-Gaussian likelihoods, requires these primitives as necessary building blocks. Depending on the covariance approximation method, e.g. exact, sparse, grid-based, or state space, the four primitives differ in their implementation and computational complexity.
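To make those primitives concrete, here is a dense-matrix ("exact covariance") NumPy sketch with an illustrative kernel and hyperparameters of my own choosing; for the quadratic term of the marginal likelihood I use $\boldsymbol{\alpha}^{\top}(\mathbf{y}-\mathbf{m})=(\mathbf{y}-\mathbf{m})^{\top}(\mathbf{K}+\sigma_n^2\mathbf{I})^{-1}(\mathbf{y}-\mathbf{m})$, the standard Gaussian quadratic form, and check the result against the direct Gaussian density.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Dense implementations of the computational primitives, plus the GP
# marginal likelihood assembled from them. Kernel and hyperparameters
# are illustrative, not taken from any particular paper.
rng = np.random.default_rng(1)
n, sigma_n2 = 50, 0.2
x = np.sort(rng.uniform(0, 5, n))
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)  # squared-exponential kernel
m = np.zeros(n)                                    # prior mean
y = rng.multivariate_normal(m, K + sigma_n2 * np.eye(n))

def solve_K(W, r):
    """(K + W^-1)^-1 r."""
    return np.linalg.solve(K + np.linalg.inv(W), r)

def mvm_K(r):
    """K r (needed, with its theta-derivatives, for learning)."""
    return K @ r

def ld_K(W):
    """log|B| with B = I + W^{1/2} K W^{1/2}; W is diagonal here."""
    Wh = np.sqrt(W)
    B = np.eye(n) + Wh @ K @ Wh
    return np.linalg.slogdet(B)[1]

# GP regression assembled from the primitives.
W = np.eye(n) / sigma_n2
alpha = solve_K(W, y - m)
logZ = -0.5 * (alpha @ (y - m) + ld_K(W) + n * np.log(2 * np.pi * sigma_n2))

# Cross-check against the direct Gaussian marginal likelihood.
direct = multivariate_normal(m, K + sigma_n2 * np.eye(n)).logpdf(y)
print(np.isclose(logZ, direct))
```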

Recent works I should also inspect include .
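As a sanity check on the linear-time reformulation quoted above, here is a sketch (again with illustrative parameters of my own) of exact GP regression under the exponential (Matérn-1/2) kernel as a scalar Kalman filter: the $O(n)$ forward pass yields the same log marginal likelihood as the $O(n^3)$ dense Cholesky computation.

```python
import numpy as np

# O(n) GP regression for the exponential (Matérn-1/2) kernel via a scalar
# Kalman filter, checked against the O(n^3) dense computation.
# Data and hyperparameters are illustrative.
rng = np.random.default_rng(0)
sigma2, ell, noise = 1.0, 0.5, 0.1
t = np.sort(rng.uniform(0, 10, 200))
y = np.sin(t) + np.sqrt(noise) * rng.standard_normal(t.size)

def kalman_loglik(t, y):
    """Exact log marginal likelihood in one forward filtering pass."""
    m, P, ll = 0.0, sigma2, 0.0          # start from the stationary prior
    for dti, yi in zip(np.diff(t, prepend=t[0]), y):
        A = np.exp(-dti / ell)           # discrete-time transition
        m, P = A * m, A * A * P + sigma2 * (1 - A * A)  # predict
        S = P + noise                    # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (yi - m) ** 2 / S)
        k = P / S                        # Kalman gain
        m, P = m + k * (yi - m), (1 - k) * P            # update
    return ll

def exact_loglik(t, y):
    """Dense GP log marginal likelihood via Cholesky."""
    C = sigma2 * np.exp(-np.abs(t[:, None] - t[None, :]) / ell) \
        + noise * np.eye(t.size)
    L = np.linalg.cholesky(C)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ a - np.log(np.diag(L)).sum() \
        - 0.5 * t.size * np.log(2 * np.pi)

print(np.isclose(kalman_loglik(t, y), exact_loglik(t, y)))
```

The state here is one-dimensional because the exponential kernel corresponds to an Ornstein-Uhlenbeck process; higher-order Matérn kernels need small vector states but the same recursion.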

## Spatio-temporal usage

Solving PDEs by filtering!

## Latent force models

I am going to argue that some latent force models fit here, if I ever get time to define them .
