Random fields as stochastic differential equations

Precision vs covariance, fight!



\(\renewcommand{\var}{\operatorname{Var}} \renewcommand{\dd}{\mathrm{d}} \renewcommand{\pd}{\partial} \renewcommand{\sinc}{\operatorname{sinc}}\)

The representation of certain random fields, especially Gaussian random fields, as stochastic differential equations. This is the engine that makes filtering Gaussian processes go, and it is also a natural framing for probabilistic spectral analysis.

I do not have much to say about this right now, but I am using it, so watch this space.

Creating a stationary Markov SDE with desired covariance

The Gauss-Markov Random Field approach.

Warning: I’m taking crib notes for myself here, so I lazily switch between signal-processing filter terminology and probabilist terminology. I assume Bochner’s and Yaglom’s theorems are familiar tools for analysing covariance kernels.

Let’s start with stationary kernels. We consider an SDE whose solution \(f: \mathbb{R}\to\mathbb{R}\) is at stationarity, and we let its driving noise be a Wiener process. We are concerned with deriving the parameters of the SDE such that it has a given stationary covariance function \(k\).

If there are no zeros in the spectral density, then there are no poles in the inverse transfer function, and we can model it with an all-pole SDE. This includes all the classic Matérn covariance functions. This is covered in J. Hartikainen and Särkkä (2010), and Lindgren, Rue, and Lindström (2011). Worked examples starting from a discrete-time formulation are given in the tutorial introduction of Grigorievskiy and Karhunen (2016).
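
For concreteness, here is a minimal numerical sketch (mine, with made-up parameters) of the all-pole construction for the Matérn-3/2 kernel in the companion form used by J. Hartikainen and Särkkä (2010): write down the two-dimensional state-space model, solve a Lyapunov equation for the stationary state covariance, then discretise exactly on a regular grid.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Matérn-3/2 kernel k(r) = s2 * (1 + lam * r) * exp(-lam * r), lam = sqrt(3) / ell
s2, ell = 1.0, 0.5
lam = np.sqrt(3.0) / ell

# Companion-form SDE dx = F x dt + L dbeta, driving white-noise spectral density q;
# the process f(t) is the first component of the state x(t) = (f, f')
F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])
L = np.array([[0.0], [1.0]])
q = 4.0 * s2 * lam**3

# Stationary state covariance solves F P + P F^T + q L L^T = 0
Pinf = solve_continuous_lyapunov(F, -q * (L @ L.T))  # = diag(s2, s2 * lam**2)

# Exact discretisation: x_{k+1} = A x_k + w_k, w_k ~ N(0, Q)
dt = 0.01
A = expm(F * dt)
Q = Pinf - A @ Pinf @ A.T

# Sample a stationary path
rng = np.random.default_rng(0)
x = rng.multivariate_normal(np.zeros(2), Pinf)
f = [x[0]]
for _ in range(5000):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    f.append(x[0])
```

The empirical autocovariance of `f` should match \(k\) up to Monte Carlo error, and filtering or smoothing against observations then proceeds by the standard Kalman recursions.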

More generally, (quasi-)periodic covariances have zeros, and we need to find a full rational-function approximation. Särkkä, Solin, and Hartikainen (2013) introduce one such method. Bolin and Lindgren (2011) explore a slightly different class.

Solin and Särkkä (2014) have a fancier method employing resonators, a.k.a. filter banks, to address a concern of Reece et al. (2014) that atomic spectral peaks in the Fourier transform are not well approximated by rational functions.

Bolin and Lindgren (2011) consider a general class of realisable systems, given by \[ \mathcal{L}_{1} X(\mathbf{s})=\mathcal{L}_{2} \mathcal{W}(\mathbf{s}) \] for some linear operators \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\).

In the case that \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) commute, this may be put in hierarchical form: \[\begin{aligned} \mathcal{L}_{1} X_{0}(\mathbf{s})&=\mathcal{W}(\mathbf{s})\\ X(\mathbf{s})&=\mathcal{L}_{2} X_{0}(\mathbf{s}). \end{aligned}\]

They explain

\(X(\mathbf{s})\) is simply \(\mathcal{L}_{2}\) applied to the solution one would get to if \(\mathcal{L}_{2}\) was the identity operator.

They call this a nested SPDE, although AFAICT you could also say ARMA. They are particularly interested in equations of this form: \[ \left(\kappa^{2}-\Delta\right)^{\alpha / 2} X(\mathbf{s})=\left(b+\mathbf{B}^{\top} \nabla\right) \mathcal{W}(\mathbf{s}) \]

The SPDE generating this class of models is \[ \left(\prod_{i=1}^{n_{1}}\left(\kappa_{i}^{2}-\Delta\right)^{\alpha_{i} / 2}\right) X(\mathbf{s})=\left(\prod_{i=1}^{n_{2}}\left(b_{i}+\mathbf{B}_{i}^{\top} \nabla\right)\right) \mathcal{W}(\mathbf{s}) \]

They show that spectral density for such an \(X(\mathbf{s})\) is given by \[ S(\mathbf{k})=\frac{\phi^{2}}{(2 \pi)^{d}} \frac{\prod_{j=1}^{n_{2}}\left(b_{j}^{2}+\mathbf{k}^{\top} \mathbf{B}_{j} \mathbf{B}_{j}^{\top} \mathbf{k}\right)}{\prod_{j=1}^{n_{1}}\left(\kappa_{j}^{2}+\|\mathbf{k}\|^{2}\right)^{\alpha_{j}}}. \]
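
That spectral density is cheap to sanity-check numerically; here is a sketch (the function and parameter names are mine):

```python
import numpy as np

def nested_spde_spectrum(k, kappas, alphas, bs, Bs, phi=1.0):
    """Spectral density S(k) of the nested SPDE at wavevector k (length d).

    kappas, alphas: length-n1 sequences for the diffusion factors;
    bs, Bs: length-n2 sequences of scalars and direction vectors.
    """
    d = len(k)
    num = np.prod([b**2 + (k @ B) ** 2 for b, B in zip(bs, Bs)])
    den = np.prod([(kap**2 + k @ k) ** alpha for kap, alpha in zip(kappas, alphas)])
    return phi**2 / (2 * np.pi) ** d * num / den

# e.g. one diffusion factor and one drift factor in d = 2
S = nested_spde_spectrum(
    np.array([0.3, 0.1]),
    kappas=[1.0], alphas=[2.0],
    bs=[1.0], Bs=[np.array([1.0, 0.0])],
)
```

With \(n_2=0\) (an empty numerator product) this reduces to the familiar Whittle-Matérn spectrum \(S(\mathbf{k})\propto(\kappa^2+\|\mathbf{k}\|^2)^{-\alpha}\).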

Convolution representations

See stochastic convolution; or, pragmatically, assume Gaussianity and see Gaussian convolution processes.

Covariance representation

Suppose there is a linear SDE on domain \(\mathbb{R}^d\) whose measure has the desired covariance structure, and ignore all questions of existence and convergence for now. We define it in terms of the driving noise \(\varepsilon\) and a linear differential operator \(\mathcal{L}\) such that \[ \mathcal{L}f(\mathbf{x})=\varepsilon(\mathbf{x}). \]

Assume there is a Green’s function for the PDE, i.e. that for any \(\mathbf{s} \in\mathbb{R}^d\) we may find a function \(G_\mathbf{s}(\mathbf{x})\) such that \[ \mathcal{L}G_\mathbf{s}(\mathbf{x})=\delta_\mathbf{s}(\mathbf{x}). \]

The solutions of the SDE, ignoring a whole bunch of existence stuff, are then given by the convolution of these Green’s functions with the driving noise, i.e. \(f(\mathbf{x}_p) \overset{\text{sorta}}{=}\int G_\mathbf{s}(\mathbf{x}_p)\varepsilon(\mathbf{s}) d \mathbf{s}.\) We use this to find the covariance of the solutions in terms of inner products of these fundamental solutions. \[\begin{align*} k(\mathbf{x}_p, \mathbf{x}_q) &=\mathbb{E}[f(\mathbf{x}_p)f(\mathbf{x}_q)] \\ &=\mathbb{E}\left[\int G_\mathbf{s}(\mathbf{x}_p)\varepsilon(\mathbf{s}) d \mathbf{s} \int G_\mathbf{t}(\mathbf{x}_q)\varepsilon(\mathbf{t}) d \mathbf{t} \right] \\ &=\mathbb{E}\left[\iint G_\mathbf{s}(\mathbf{x}_p) G_\mathbf{t}(\mathbf{x}_q) \varepsilon(\mathbf{s}) \varepsilon(\mathbf{t}) d \mathbf{t} d \mathbf{s} \right] \\ &=\iint G_\mathbf{s}(\mathbf{x}_p) G_\mathbf{t}(\mathbf{x}_q) \mathbb{E}[\varepsilon(\mathbf{s}) \varepsilon(\mathbf{t})] d \mathbf{t} d \mathbf{s} \\ &=\iint G_\mathbf{s}(\mathbf{x}_p) G_\mathbf{t}(\mathbf{x}_q) \sigma^2_\varepsilon \delta_\mathbf{s} (\mathbf{t}) d \mathbf{t} d \mathbf{s} &\text{ whiteness}\\ &=\sigma^2_\varepsilon \int G_\mathbf{s}(\mathbf{x}_p) G_\mathbf{s}(\mathbf{x}_q) d \mathbf{s}\\ &=\sigma^2_\varepsilon \langle G_\cdot(\mathbf{x}_p), G_\cdot(\mathbf{x}_q)\rangle \end{align*}\]
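
A concrete instance where everything is explicit: in one dimension, take \(\mathcal{L}=\partial_x+a\), whose causal Green’s function is \(G_\mathbf{s}(x)=e^{-a(x-s)}\mathbb{1}\{x\geq s\},\) and the inner product above recovers the Ornstein-Uhlenbeck covariance \(\frac{\sigma_\varepsilon^2}{2a}e^{-a|x_p-x_q|}.\) A quadrature sketch (my parameter choices):

```python
import numpy as np

a, sigma2_eps = 1.5, 1.0
xp, xq = 0.3, 1.1

# Causal Green's function of L = d/dx + a
G = lambda s, x: np.exp(-a * (x - s)) * (s <= x)

# Riemann sum for sigma2 * int G_s(xp) G_s(xq) ds
s, ds = np.linspace(-20.0, 2.0, 400_001, retstep=True)
k_quad = sigma2_eps * np.sum(G(s, xp) * G(s, xq)) * ds

# Closed-form Ornstein-Uhlenbeck covariance
k_exact = sigma2_eps / (2 * a) * np.exp(-a * abs(xp - xq))
print(k_quad, k_exact)  # agree to quadrature accuracy
```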

After that, the question is: given a Green’s function, can we produce a linear operator that realises it?

For example, the arc-cosine kernel of order \(1\) corresponding to the ReLU is \[\begin{align*} k(\mathbf{x}_p, \mathbf{x}_q) &= \frac{\sigma_\varepsilon^2 \Vert \mathbf{x}_p \Vert \Vert \mathbf{x}_q \Vert }{2\pi} \Big( \sin |\theta| + \big(\pi - |\theta| \big) \cos\theta \Big) \end{align*}\] so for Green’s functions inducing this to exist we would want \[\begin{align} \int G_\mathbf{s}(\mathbf{x}_p) G_\mathbf{s}(\mathbf{x}_q) d \mathbf{s} &=\frac{\Vert \mathbf{x}_p \Vert \Vert \mathbf{x}_q \Vert }{2\pi} \Big( \sin |\theta| + \big(\pi - |\theta| \big) \cos\theta \Big) \end{align}\] For this to work we would need \(G_\mathbf{s}(\mathbf{x})\propto\Vert \mathbf{x} \Vert.\)
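
The right-hand side is at least easy to test by Monte Carlo, since the order-1 arc-cosine kernel is exactly the ReLU feature expectation \(\mathbb{E}\big[\psi(\mathbf{W}^\top \mathbf{x}_p)\psi(\mathbf{W}^\top \mathbf{x}_q)\big]\) for \(\psi=\max(0,\cdot)\) and standard Gaussian \(\mathbf{W}\). A sketch, taking \(\sigma_\varepsilon=1\):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
xp, xq = rng.normal(size=d), rng.normal(size=d)

# Closed form: order-1 arc-cosine kernel
costh = xp @ xq / (np.linalg.norm(xp) * np.linalg.norm(xq))
theta = np.arccos(np.clip(costh, -1.0, 1.0))
k_exact = (
    np.linalg.norm(xp) * np.linalg.norm(xq) / (2 * np.pi)
    * (np.sin(theta) + (np.pi - theta) * np.cos(theta))
)

# Monte Carlo over ReLU features with standard Gaussian weights
W = rng.normal(size=(1_000_000, d))
relu = lambda z: np.maximum(z, 0.0)
k_mc = np.mean(relu(W @ xp) * relu(W @ xq))
print(k_exact, k_mc)  # agree to ~3 decimal places
```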

Input measures

Warning: this is just a dump of some notes from a paper I was writing; it does not make much sense right now. The essential idea I want to get at is considering different enveloping strategies for the SDE: enveloping the input noise, for example.

Suppose \(\mathbf{x}_p, \mathbf{x}_q \in \mathbb{R}^d\). Since the kernel is positively homogeneous of degree 1 in each argument (as the arc-cosine kernel above is), Euler’s identity gives \[\begin{aligned} k(\mathbf{x}_p, \mathbf{x}_q) = \sum_{j=1}^d \frac{\partial k}{\partial x_{pj}} x_{pj}. \end{aligned}\] Let \(f\) denote the Gaussian process with covariance function \(k\) and let \(\mathcal{F}_{\mu}[f]\) denote the Fourier transform of \(f\) with respect to the finite measure \(\mu\). Let the Fourier transform of \(\mu\) be denoted \(\mathcal{F}[\mu](\mathbf{\omega})=\int e^{-i \mathbf{\omega}^\top \mathbf{x}} \, \mu(\dd\mathbf{x})\), so that \(\mathcal{F}_{\mu}[f]=\mathcal{F}[f(x)\partial_x \mu(x)]=\mathcal{F}[f(x)]\ast\mathcal{F} [ \mu].\)

We have \[\begin{aligned} \mathbb{E} | \mathcal{F}_{\mu}[f](\mathbf{\omega})|^2 &= \iint \mathbb{E}\big[ f(\mathbf{x}_p) f(\mathbf{x}_q) \big]\, e^{-i\mathbf{\omega}^\top(\mathbf{x}_p - \mathbf{x}_q)} \mu(\dd\mathbf{x}_p) \mu(\dd\mathbf{x}_q) \\ &= \iint k(\mathbf{x}_p, \mathbf{x}_q) \, e^{-i\mathbf{\omega}^\top(\mathbf{x}_p - \mathbf{x}_q)}\,\mu(\dd\mathbf{x}_p) \mu(\dd\mathbf{x}_q) \\ &= \iint \sum_{j=1}^d \frac{\partial k}{\partial x_{pj}} x_{pj} e^{-i \mathbf{\omega}^\top \mathbf{x}_p} \mu(\dd\mathbf{x}_p) \, e^{i \mathbf{\omega}^\top \mathbf{x}_q} \,\mu(\dd\mathbf{x}_q) \\ &= \int \sum_{j=1}^d i \frac{\partial}{\partial \omega_j} \Bigg( \int \frac{\partial k}{\partial x_{pj}} e^{-i \mathbf{\omega}^\top \mathbf{x}_p} \, \mu(\dd\mathbf{x}_p)\Bigg) e^{i \mathbf{\omega}^\top \mathbf{x}_q} \mu(\dd\mathbf{x}_q)\\ &= \mathcal{F}_{\mu}^{\mathbf{x}_q} \left[ \sum_{j=1}^d i \frac{\partial}{\partial \omega_j} \Bigg( \int \frac{\partial k}{\partial x_{pj}} e^{-i \mathbf{\omega}^\top \mathbf{x}_p} \, \mu(\dd\mathbf{x}_p)\Bigg) \right]\\ &= \mathcal{F}_{\mu}^{\mathbf{x}_q} \left[ \sum_{j=1}^d i \frac{\partial}{\partial \omega_j} \Bigg( \mathcal{F}_{\mu}^{\mathbf{x}_p}\left[ \frac{\partial k}{\partial x_{pj}} \right]\Bigg) \right].\end{aligned}\] Then \[\begin{aligned} \mathbb{E} | \mathcal{F}_{\mu}[f](\mathbf{\omega})|^2 &= \int \sum_{j=1}^d i \frac{\partial}{\partial \omega_j} \Bigg( \int \frac{\partial k}{\partial x_{pj}} e^{-i \mathbf{\omega}^\top \mathbf{x}_p} \, \mu(\dd\mathbf{x}_p) \, \Bigg) e^{i \mathbf{\omega}^\top \mathbf{x}_q}\mu(\dd\mathbf{x}_q)\\ &= -\int \sum_{j=1}^d \frac{\partial}{\partial \omega_j} \Bigg( \omega_j \int k(\mathbf{x}_p, \mathbf{x}_q) e^{-i \mathbf{\omega}^\top \mathbf{x}_p} \, \mu(\dd\mathbf{x}_p) \, \Bigg)e^{i \mathbf{\omega}^\top \mathbf{x}_q}\mu(\dd\mathbf{x}_q) \\ &= -\sum_{j=1}^d \int \Bigg( \int k(\mathbf{x}_p, \mathbf{x}_q) e^{-i \mathbf{\omega}^\top \mathbf{x}_p} \, \mu(\dd\mathbf{x}_p) \, \Bigg) e^{i \mathbf{\omega}^\top \mathbf{x}_q}\mu(\dd\mathbf{x}_q)\\ &\phantom{{}={}}-\sum_{j=1}^d \int \Bigg( \omega_j \frac{\partial}{\partial \omega_j} \int k(\mathbf{x}_p, \mathbf{x}_q) e^{-i \mathbf{\omega}^\top \mathbf{x}_p} \, \mu(\dd\mathbf{x}_p) \, \Bigg) e^{i \mathbf{\omega}^\top \mathbf{x}_q}\mu(\dd\mathbf{x}_q) \\ (d+1)\mathbb{E} | \mathcal{F}_{\mu}[f](\mathbf{\omega})|^2 &= -\sum_{j=1}^d \int \Bigg( \omega_j \frac{\partial}{\partial \omega_j} \int k(\mathbf{x}_p, \mathbf{x}_q) e^{-i \mathbf{\omega}^\top \mathbf{x}_p} \, \mu(\dd\mathbf{x}_p) \, \Bigg) e^{i \mathbf{\omega}^\top \mathbf{x}_q}\mu(\dd\mathbf{x}_q) \end{aligned}\]
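
Only the first equality above is load-bearing for me right now, and it is cheap to check in one dimension by comparing the double quadrature against simulated draws of \(f\). A sketch with an OU kernel and \(\mu\) uniform on \([-1,1]\) (all choices mine):

```python
import numpy as np

rng = np.random.default_rng(2)
k = lambda xp, xq: np.exp(-np.abs(xp - xq))  # OU kernel
omega = 2.0
x, dx = np.linspace(-1.0, 1.0, 401, retstep=True)
XP, XQ = np.meshgrid(x, x)

# Quadrature: iint k(xp, xq) exp(-i w (xp - xq)) mu(dxp) mu(dxq)
lhs = np.sum(k(XP, XQ) * np.exp(-1j * omega * (XP - XQ))) * dx**2

# Monte Carlo: E |F_mu[f](w)|^2 over Gaussian process draws on the grid
K = k(XP, XQ) + 1e-9 * np.eye(len(x))
C = np.linalg.cholesky(K)
fs = C @ rng.normal(size=(len(x), 4000))   # columns are draws of f
F = (np.exp(-1j * omega * x) * dx) @ fs     # F_mu[f](w) for each draw
rhs = np.mean(np.abs(F) ** 2)
print(lhs.real, rhs)  # roughly agree
```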

\(\mu\) is a hypercube

We assume that \(\mu\) is invariant with respect to permutation of coordinates. If we aren’t being silly, that means a Cartesian product of intervals \(I\), \(\mu(A):=\operatorname{Leb}(A\cap I^d).\) Let us go with \(I=[-1,1].\) Then, in the normalised convention \(\sinc x = \sin(\pi x)/(\pi x)\), \[\begin{aligned} \mathcal{F}[\mu](\mathbf{\omega}) &=\prod_{j=1}^d 2\sinc \left( \frac{\omega_j}{\pi}\right)\\ &=\prod_{j=1}^d \frac{2\sin \omega_j}{\omega_j}.\end{aligned}\] Also \[\begin{aligned} \sinc'x &=\frac{\cos \pi x - \sinc x}{x}.\end{aligned}\]
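
A quick quadrature check of that transform (a sketch):

```python
import numpy as np

w = 1.7
x, dx = np.linspace(-1.0, 1.0, 200_001, retstep=True)
quad = np.sum(np.exp(-1j * w * x)) * dx  # int_{-1}^{1} exp(-i w x) dx
closed = 2 * np.sinc(w / np.pi)          # np.sinc(t) = sin(pi t)/(pi t), so this is 2 sin(w)/w
print(quad.real, closed)
```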

\(\mu\) is the unit sphere

TBD.

\(\mu\) is an isotropic Gaussian

Suppose \(\mu\) is an isotropic Gaussian of variance \(I\sigma^2\), so that \(\dd \mu(\mathbf{x})=(2\pi)^{-d/2}\sigma^{-d}e^{-\mathbf{x}^\top\mathbf{x}/(2\sigma^2)}\dd\mathbf{x}\) and \(\mathcal{F}[\mu](\mathbf{\omega})=e^{-\sigma^2\mathbf{\omega}^\top\mathbf{\omega}/2},\) which is, up to normalisation, a Gaussian density in \(\mathbf{\omega}\) with variance \(I\sigma^{-2}\).
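
And the corresponding one-dimensional check (a sketch):

```python
import numpy as np

sigma, w = 0.7, 1.3
x, dx = np.linspace(-10.0, 10.0, 200_001, retstep=True)
dens = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
quad = np.sum(np.exp(-1j * w * x) * dens) * dx  # F[mu](w) by quadrature
closed = np.exp(-sigma**2 * w**2 / 2)
print(quad.real, closed)
```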

Without stationarity via Green’s functions

Suppose our SDE may be specified in terms of a Gaussian white driving noise with variance \(\sigma_w^2\) and an impulse response function/Green’s function \(g\), such that \[\begin{aligned} f(\mathbf{x}):=\int g(\mathbf{u},\mathbf{x})\dd w(\mathbf{u}).\end{aligned}\] We know that the kernel is an inner-product kernel and therefore invariant to rotation about \(\mathbf{0},\) i.e. for orthogonal \(Q\), \(k(Q\mathbf{x}_p, Q\mathbf{x}_q)=k(\mathbf{x}_p, \mathbf{x}_q).\) It follows that \(g(Q\mathbf{u}, Q\mathbf{x})=g(\mathbf{u}, \mathbf{x}).\) In fact, we may write each in dot-product form, i.e. \(k(\mathbf{x}_p, \mathbf{x}_q)=k(\mathbf{x}_p\cdot \mathbf{x}_q)\) and \(g(\mathbf{u}, \mathbf{x})=g(\mathbf{u}\cdot \mathbf{x}).\) The kernel satisfies \[\begin{aligned} k(\mathbf{x}_p, \mathbf{x}_q) &= \mathbb{E}\left[\int g(\mathbf{u},\mathbf{x}_p)\dd w(\mathbf{u})\int g(\mathbf{v},\mathbf{x}_q)\dd w(\mathbf{v})\right]\\ &= \mathbb{E}\left[\iint g(\mathbf{u},\mathbf{x}_p) g(\mathbf{v},\mathbf{x}_q)\dd w(\mathbf{u})\dd w(\mathbf{v})\right]\\ &= \iint g(\mathbf{u},\mathbf{x}_p) g(\mathbf{v},\mathbf{x}_q) \sigma_w^2\delta(\mathbf{u},\mathbf{v})\dd \mathbf{v}\dd \mathbf{u}\\ &= \sigma_w^2\int g(\mathbf{u},\mathbf{x}_p) g(\mathbf{u},\mathbf{x}_q) \dd\mathbf{u}\end{aligned}\] Up to a scaling factor, the covariance kernel is thus the inner product of the Green’s functions, under the assumption that the driving noise is white.

Recalling \(k(\mathbf{x}_p, \mathbf{x}_q) = \mathbb{E}\big[ \psi(\mathbf{W}^\top \mathbf{x}_q) \psi(\mathbf{W}^\top \mathbf{x}_p) \big]= \mathbb{E}\big[ \psi(Z_p) \psi(Z_q) \big]\) (with \(\psi\) an activation function and \(\mathbf{W}\) a standard Gaussian weight vector), the Green’s function must thus satisfy \[\begin{aligned} \sigma_w^2\int g(\mathbf{u}\cdot\mathbf{x}_p) g(\mathbf{u}\cdot \mathbf{x}_q) \dd\mathbf{u} &= \mathbb{E}\big[ \psi(\mathbf{W}^\top \mathbf{x}_q) \psi(\mathbf{W}^\top \mathbf{x}_p) \big].\end{aligned}\] Now we need to see how this works for individual kernels.

References

Aasnaes, H., and T. Kailath. 1973. “An Innovations Approach to Least-Squares Estimation–Part VII: Some Applications of Vector Autoregressive-Moving Average Models.” IEEE Transactions on Automatic Control 18 (6): 601–7.
Álvarez, Mauricio A., David Luengo, and Neil D. Lawrence. 2013. “Linear Latent Force Models Using Gaussian Processes.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (11): 2693–2705.
Antoulas, Athanasios C., ed. 1991. Mathematical System Theory: The Influence of R. E. Kalman. Berlin, Heidelberg: Springer Berlin Heidelberg.
Bakka, Haakon, Håvard Rue, Geir-Arne Fuglstad, Andrea Riebler, David Bolin, Janine Illian, Elias Krainski, Daniel Simpson, and Finn Lindgren. 2018. “Spatial Modeling with R-INLA: A Review.” WIREs Computational Statistics 10 (6): e1443.
Bart, H., I. Gohberg, and M. A. Kaashoek. 1979. Minimal Factorization of Matrix and Operator Functions. Vol. 1. Operator Theory, Advances and Applications, v. 1. Basel ; Boston: Birkhäuser Verlag.
Berry, Tyrus, Dimitrios Giannakis, and John Harlim. 2020. “Bridging Data Science and Dynamical Systems Theory.” arXiv:2002.07928 [Physics, Stat], June.
Bolin, David. 2014. “Spatial Matérn Fields Driven by Non-Gaussian Noise.” Scandinavian Journal of Statistics 41 (3): 557–79.
Bolin, David, and Kristin Kirchner. 2020. “The Rational SPDE Approach for Gaussian Random Fields With General Smoothness.” Journal of Computational and Graphical Statistics 29 (2): 274–85.
Bolin, David, and Finn Lindgren. 2011. “Spatial Models Generated by Nested Stochastic Partial Differential Equations, with an Application to Global Ozone Mapping.” The Annals of Applied Statistics 5 (1): 523–50.
Borovitskiy, Viacheslav, Alexander Terenin, Peter Mostowsky, and Marc Peter Deisenroth. 2020. “Matérn Gaussian Processes on Riemannian Manifolds.” arXiv:2006.10160 [Cs, Stat], June.
Bruinsma, Wessel, and Richard E. Turner. 2018. “Learning Causally-Generated Stationary Time Series.” arXiv:1802.08167 [Stat], February.
Chang, Paul E, William J Wilkinson, Mohammad Emtiyaz Khan, and Arno Solin. 2020. “Fast Variational Learning in State-Space Gaussian Process Models.” In MLSP, 6.
Curtain, Ruth F. 1975. “Infinite-Dimensional Filtering.” SIAM Journal on Control 13 (1): 89–104.
Dowling, Matthew, Piotr Sokół, and Il Memming Park. 2021. “Hida-Matérn Kernel.” arXiv.
Dutordoir, Vincent, James Hensman, Mark van der Wilk, Carl Henrik Ek, Zoubin Ghahramani, and Nicolas Durrande. 2021. “Deep Neural Networks as Point Estimates for Deep Gaussian Processes.” In arXiv:2105.04504 [Cs, Stat].
Duttweiler, D., and T. Kailath. 1973a. “RKHS Approach to Detection and Estimation Problems–IV: Non-Gaussian Detection.” IEEE Transactions on Information Theory 19 (1): 19–28.
———. 1973b. “RKHS Approach to Detection and Estimation Problems–V: Parameter Estimation.” IEEE Transactions on Information Theory 19 (1): 29–37.
E, Weinan. 2017. “A Proposal on Machine Learning via Dynamical Systems.” Communications in Mathematics and Statistics 5 (1): 1–11.
Friedlander, B., T. Kailath, and L. Ljung. 1975. “Scattering Theory and Linear Least Squares Estimation: Part II: Discrete-Time Problems.” In 1975 IEEE Conference on Decision and Control Including the 14th Symposium on Adaptive Processes, 57–58.
Gevers, M., and T. Kailath. 1973. “An Innovations Approach to Least-Squares Estimation–Part VI: Discrete-Time Innovations Representations and Recursive Estimation.” IEEE Transactions on Automatic Control 18 (6): 588–600.
Grigorievskiy, Alexander, and Juha Karhunen. 2016. “Gaussian Process Kernels for Popular State-Space Time Series Models.” In 2016 International Joint Conference on Neural Networks (IJCNN), 3354–63. Vancouver, BC, Canada: IEEE.
Grigorievskiy, Alexander, Neil Lawrence, and Simo Särkkä. 2017. “Parallelizable Sparse Inverse Formulation Gaussian Processes (SpInGP).” In arXiv:1610.08035 [Stat].
Hartikainen, Jouni, and Simo Särkkä. 2011. “Sequential Inference for Latent Force Models.” In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, 311–18. UAI’11. Arlington, Virginia, USA: AUAI Press.
Hartikainen, Jouni, Mari Seppänen, and Simo Särkkä. 2012. “State-Space Inference for Non-Linear Latent Force Models with Application to Satellite Orbit Prediction.” In Proceedings of the 29th International Coference on International Conference on Machine Learning, 723–30. ICML’12. Madison, WI, USA: Omnipress.
Hartikainen, J., and S. Särkkä. 2010. “Kalman Filtering and Smoothing Solutions to Temporal Gaussian Process Regression Models.” In 2010 IEEE International Workshop on Machine Learning for Signal Processing, 379–84. Kittila, Finland: IEEE.
Higdon, Dave. 2002. “Space and Space-Time Modeling Using Process Convolutions.” In Quantitative Methods for Current Environmental Issues, edited by Clive W. Anderson, Vic Barnett, Philip C. Chatwin, and Abdel H. El-Shaarawi, 37–56. London: Springer.
Higdon, David. 1998. “A Process-Convolution Approach to Modelling Temperatures in the North Atlantic Ocean.” Environmental and Ecological Statistics 5 (2): 173–90.
Hildeman, Anders, David Bolin, and Igor Rychlik. 2019. “Joint Spatial Modeling of Significant Wave Height and Wave Period Using the SPDE Approach.” arXiv:1906.00286 [Stat], June.
Hu, Xiangping, and Ingelin Steinsland. 2016. “Spatial Modeling with System of Stochastic Partial Differential Equations.” WIREs Computational Statistics 8 (2): 112–25.
Huber, Marco F. 2014. “Recursive Gaussian Process: On-Line Regression and Learning.” Pattern Recognition Letters 45 (August): 85–91.
Kailath, T. 1971a. “RKHS Approach to Detection and Estimation Problems–I: Deterministic Signals in Gaussian Noise.” IEEE Transactions on Information Theory 17 (5): 530–49.
———. 1971b. “A Note on Least-Squares Estimation by the Innovations Method.” In 1971 IEEE Conference on Decision and Control, 407–11.
———. 1974. “A View of Three Decades of Linear Filtering Theory.” IEEE Transactions on Information Theory 20 (2): 146–81.
Kailath, T., and D. Duttweiler. 1972. “An RKHS Approach to Detection and Estimation Problems–III: Generalized Innovations Representations and a Likelihood-Ratio Formula.” IEEE Transactions on Information Theory 18 (6): 730–45.
Kailath, T., and R. Geesey. 1971. “An Innovations Approach to Least Squares Estimation–Part IV: Recursive Estimation Given Lumped Covariance Functions.” IEEE Transactions on Automatic Control 16 (6): 720–27.
———. 1973. “An Innovations Approach to Least-Squares Estimation–Part V: Innovations Representations and Recursive Estimation in Colored Noise.” IEEE Transactions on Automatic Control 18 (5): 435–53.
Kailath, T., R. Geesey, and H. Weinert. 1972. “Some Relations Among RKHS Norms, Fredholm Equations, and Innovations Representations.” IEEE Transactions on Information Theory 18 (3): 341–48.
Kailath, Thomas. 1971. “The Structure of Radon-Nikodym Derivatives with Respect to Wiener and Related Measures.” The Annals of Mathematical Statistics 42 (3): 1054–67.
Kailath, T., and H. Weinert. 1975. “An RKHS Approach to Detection and Estimation Problems–II: Gaussian Signal Detection.” IEEE Transactions on Information Theory 21 (1): 15–23.
Karvonen, Toni, and Simo Särkkä. 2016. “Approximate State-Space Gaussian Processes via Spectral Transformation.” In 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), 1–6. Vietri sul Mare, Salerno, Italy: IEEE.
Lee, Herbert KH, Dave M Higdon, Catherine A Calder, and Christopher H Holloman. 2005. “Efficient Models for Correlated Data via Convolutions of Intrinsic Processes.” Statistical Modelling 5 (1): 53–74.
Lindgren, Finn, David Bolin, and Håvard Rue. 2021. “The SPDE Approach for Gaussian and Non-Gaussian Fields: 10 Years and Still Running.” arXiv:2111.01084 [Stat], November.
Lindgren, Finn, and Håvard Rue. 2015. “Bayesian Spatial Modelling with R-INLA.” Journal of Statistical Software 63 (i19): 1–25.
Lindgren, Finn, Håvard Rue, and Johan Lindström. 2011. “An Explicit Link Between Gaussian Fields and Gaussian Markov Random Fields: The Stochastic Partial Differential Equation Approach.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 73 (4): 423–98.
Ljung, L., and T. Kailath. 1976. “Backwards Markovian Models for Second-Order Stochastic Processes (Corresp.).” IEEE Transactions on Information Theory 22 (4): 488–91.
Ljung, L., T. Kailath, and B. Friedlander. 1975. “Scattering Theory and Linear Least Squares Estimation: Part I: Continuous-Time Problems.” In 1975 IEEE Conference on Decision and Control Including the 14th Symposium on Adaptive Processes, 55–56.
Meyer, Renate, Matthew C. Edwards, Patricio Maturana-Russel, and Nelson Christensen. 2020. “Computational Techniques for Parameter Estimation of Gravitational Wave Signals.” WIREs Computational Statistics n/a (n/a): e1532.
Park, Chull. 1981. “Representations of Gaussian Processes by Wiener Processes.” Pacific Journal of Mathematics 94 (2): 407–15.
Pluch, Philipp. 2007. “Some Theory for the Analysis of Random Fields - With Applications to Geostatistics.” arXiv:math/0701323, January.
Rackauckas, Christopher, Yingbo Ma, Julius Martensen, Collin Warner, Kirill Zubov, Rohit Supekar, Dominic Skinner, Ali Ramadhan, and Alan Edelman. 2020. “Universal Differential Equations for Scientific Machine Learning.” arXiv:2001.04385 [Cs, Math, q-Bio, Stat], August.
Reece, S., and S. Roberts. 2010. “An Introduction to Gaussian Processes for the Kalman Filter Expert.” In 2010 13th International Conference on Information Fusion, 1–9.
Reece, Steven, Siddhartha Ghosh, Alex Rogers, Stephen Roberts, and Nicholas R. Jennings. 2014. “Efficient State-Space Inference of Periodic Latent Force Models.” The Journal of Machine Learning Research 15 (1): 2337–97.
Rue, Håvard, and Håkon Tjelmeland. 2002. “Fitting Gaussian Markov Random Fields to Gaussian Fields.” Scandinavian Journal of Statistics 29 (1): 31–49.
Särkkä, Simo. 2011. “Linear Operators and Stochastic Partial Differential Equations in Gaussian Process Regression.” In Artificial Neural Networks and Machine Learning – ICANN 2011, edited by Timo Honkela, Włodzisław Duch, Mark Girolami, and Samuel Kaski, 6792:151–58. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.
Särkkä, Simo, Mauricio A. Álvarez, and Neil D. Lawrence. 2019. “Gaussian Process Latent Force Models for Learning and Stochastic Control of Physical Systems.” IEEE Transactions on Automatic Control 64 (7): 2953–60.
Särkkä, Simo, and Robert Piché. 2014. “On Convergence and Accuracy of State-Space Approximations of Squared Exponential Covariance Functions.” In 2014 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), 1–6.
Särkkä, Simo, A. Solin, and J. Hartikainen. 2013. “Spatiotemporal Learning via Infinite-Dimensional Bayesian Filtering and Smoothing: A Look at Gaussian Process Regression Through Kalman Filtering.” IEEE Signal Processing Magazine 30 (4): 51–61.
Särkkä, Simo, and Arno Solin. 2019. Applied Stochastic Differential Equations. Institute of Mathematical Statistics Textbooks 10. Cambridge ; New York, NY: Cambridge University Press.
Scharf, Henry R., Mevin B. Hooten, Devin S. Johnson, and John W. Durban. 2017. “Process Convolution Approaches for Modeling Interacting Trajectories.” arXiv:1703.02112 [Stat], November.
Segall, A., M. Davis, and T. Kailath. 1975. “Nonlinear Filtering with Counting Observations.” IEEE Transactions on Information Theory 21 (2): 143–49.
Segall, A., and T. Kailath. 1976. “Orthogonal Functionals of Independent-Increment Processes.” IEEE Transactions on Information Theory 22 (3): 287–98.
Sigrist, Fabio, Hans R. Künsch, and Werner A. Stahel. 2015a. “Spate: An R Package for Spatio-Temporal Modeling with a Stochastic Advection-Diffusion Process.” Journal of Statistical Software 63 (14).
———. 2015b. “Stochastic Partial Differential Equation Based Modelling of Large Space-Time Data Sets.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 77 (1): 3–33.
Solin, Arno. 2016. “Stochastic Differential Equation Methods for Spatio-Temporal Gaussian Process Regression.” Aalto University.
Solin, Arno, and Simo Särkkä. 2013. “Infinite-Dimensional Bayesian Filtering for Detection of Quasiperiodic Phenomena in Spatiotemporal Data.” Physical Review E 88 (5): 052909.
———. 2014. “Explicit Link Between Periodic Covariance Functions and State Space Models.” In Artificial Intelligence and Statistics, 904–12.
———. 2020. “Hilbert Space Methods for Reduced-Rank Gaussian Process Regression.” Statistics and Computing 30 (2): 419–46.
Tompkins, Anthony, and Fabio Ramos. 2018. “Fourier Feature Approximations for Periodic Kernels in Time-Series Modelling.” Proceedings of the AAAI Conference on Artificial Intelligence 32 (1).
Weinert, H. L., and T. Kailath. 1974. “Minimum Energy Control Using Spline Functions.” In 1974 IEEE Conference on Decision and Control Including the 13th Symposium on Adaptive Processes, 169–72.
Weinert, Howard L., and Thomas Kailath. 1974. “Stochastic Interpretations and Recursive Algorithms for Spline Functions.” The Annals of Statistics 2 (4): 787–94.
Whittle, Peter. 1963. “Stochastic-Processes in Several Dimensions.” Bulletin of the International Statistical Institute 40 (2): 974–94.
Wilkinson, William J., M. Riis Andersen, J. D. Reiss, D. Stowell, and A. Solin. 2019. “Unifying Probabilistic Models for Time-Frequency Analysis.” In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3352–56.
Wilkinson, William J., Simo Särkkä, and Arno Solin. 2021. “Bayes-Newton Methods for Approximate Bayesian Inference with PSD Guarantees.” arXiv.
Wilson, James T, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, and Marc Deisenroth. 2020. “Efficiently Sampling Functions from Gaussian Process Posteriors.” In Proceedings of the 37th International Conference on Machine Learning, 10292–302. PMLR.
Wilson, James T, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, and Marc Peter Deisenroth. 2021. “Pathwise Conditioning of Gaussian Processes.” Journal of Machine Learning Research 22 (105): 1–47.
Wolpert, R., and Katja Ickstadt. 1998. “Poisson/Gamma Random Field Models for Spatial Statistics.” Biometrika 85 (2): 251–67.
Yaglom, A. M. 1987a. Correlation Theory of Stationary and Related Random Functions. Volume II: Supplementary Notes and References. Springer Series in Statistics. New York, NY: Springer Science & Business Media.
———. 1987b. Correlation Theory of Stationary and Related Random Functions Volume I. Springer-Verlag.
———. 2004. An Introduction to the Theory of Stationary Random Functions. Courier Corporation.
