Transforming stationary processes into non-stationary ones by transforming their inputs (Sampson and Guttorp 1992; Genton 2001; Genton and Perrin 2004; Perrin and Senoussi 1999, 2000).

This is of interest both for composing kernels with known desirable properties via known transforms, and for learning (somewhat) arbitrary transforms that render a process stationary.

One might consider instead processes that are stationary upon a manifold.

## Stationary reducible kernels

The main idea is to find a new feature space where stationarity (Sampson and Guttorp 1992) or local stationarity (Perrin and Senoussi 1999, 2000; Genton and Perrin 2004) can be achieved.

Genton (2001) summarises:

We say that a nonstationary kernel \(K(\mathbf{x}, \mathbf{z})\) is *stationary reducible* if there exists a bijective deformation \(\mathbf{\Phi}\) such that \[ K(\mathbf{x}, \mathbf{z})=K_{S}^{*}(\mathbf{\Phi}(\mathbf{x})-\mathbf{\Phi}(\mathbf{z})) \] where \(K_{S}^{*}\) is a stationary kernel.
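A minimal sketch of the idea, assuming a squared-exponential for \(K_S^*\) and a hypothetical warp \(\mathbf{\Phi}=\sinh\) (chosen only because it is monotone and hence bijective on \(\mathbb{R}\)):

```python
import numpy as np

def k_s(d):
    """Stationary squared-exponential kernel, evaluated on a lag vector d."""
    return np.exp(-0.5 * np.sum(np.asarray(d) ** 2, axis=-1))

def phi(x):
    """Hypothetical bijective deformation of the inputs (monotone => bijective)."""
    return np.sinh(x)

def k_ns(x, z):
    """Nonstationary kernel induced by warping inputs before a stationary kernel."""
    return k_s(phi(x) - phi(z))
```

Since `phi` is applied pointwise before a valid stationary kernel, `k_ns` inherits positive definiteness; it is nonstationary because `k_ns(x, z)` depends on the locations of `x` and `z`, not just their difference.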

## Classic deformations

### MacKay warping

### As a function of input

Apparently invented by Gibbs (1998) and generalised by Paciorek and Schervish (2003).

Let \(k_S\) be some stationary kernel on \(\mathbb{R}^D.\) Let \(\Sigma(\mathbf{x})\) be a \(D \times D\) matrix-valued function which is positive definite for all \(\mathbf{x},\) and let \(\Sigma_{i} \triangleq \Sigma\left(\mathbf{x}_{i}\right).\) Then define \[ Q_{i j}=\left(\mathbf{x}_{i}-\mathbf{x}_{j}\right)^{\top}\left(\left(\Sigma_{i}+\Sigma_{j}\right) / 2\right)^{-1}\left(\mathbf{x}_{i}-\mathbf{x}_{j}\right) \] Then \[ k_{\mathrm{NS}}\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right)=2^{D / 2}\left|\Sigma_{i}\right|^{1 / 4}\left|\Sigma_{j}\right|^{1 / 4}\left|\Sigma_{i}+\Sigma_{j}\right|^{-1 / 2} k_{\mathrm{S}}\left(\sqrt{Q_{i j}}\right) \] is a valid non-stationary covariance function.
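A direct transcription of this construction as code — a sketch only; `Sigma` and `k_s` are user-supplied, and the squared-exponential used in the usage note is just for demonstration:

```python
import numpy as np

def paciorek_schervish(x_i, x_j, Sigma, k_s):
    """Nonstationary kernel of Paciorek and Schervish (2003).

    x_i, x_j : (D,) input points
    Sigma    : callable returning a (D, D) positive-definite matrix at x
    k_s      : stationary kernel as a function of the scalar distance sqrt(Q_ij)
    """
    D = x_i.shape[0]
    Si, Sj = Sigma(x_i), Sigma(x_j)
    diff = x_i - x_j
    # Mahalanobis-type distance under the averaged local covariance
    Q = diff @ np.linalg.solve((Si + Sj) / 2.0, diff)
    prefactor = (2.0 ** (D / 2)
                 * np.linalg.det(Si) ** 0.25
                 * np.linalg.det(Sj) ** 0.25
                 * np.linalg.det(Si + Sj) ** -0.5)
    return prefactor * k_s(np.sqrt(Q))
```

Note that for a constant \(\Sigma(\mathbf{x}) \equiv \Sigma\) the determinant prefactor collapses to 1 and we recover the stationary kernel evaluated at the Mahalanobis distance, which is a handy sanity check.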

Homework question: Is this a product of convolutional Gaussian processes?

## Learning transforms

## References

*The Annals of Statistics* 36 (2): 719–41.

*The Annals of Statistics* 37 (5A).

*International Conference on Machine Learning*, 541–49.

*arXiv:1709.10441 [Cs, Math]*, June.

*Environmetrics* 12 (2): 161–78.

*Conference on Learning Theory*, 1647–50. PMLR.

*Journal of Machine Learning Research* 2 (December): 299–312.

*Journal of Applied Probability* 41 (1): 236–49.

*Advances in Neural Information Processing Systems 20*, edited by J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, 1249–56. Curran Associates, Inc.

*arXiv:1911.11992 [Math, Stat]*, March.

*Proceedings of the 16th International Conference on Neural Information Processing Systems*, 16:273–80. NIPS'03. Cambridge, MA, USA: MIT Press.

*Statistics & Probability Letters* 43 (4): 393–97.

*Statistics & Probability Letters* 48 (1): 23–32.

*Gaussian Processes for Machine Learning*. Adaptive Computation and Machine Learning. Cambridge, Mass.: MIT Press.

*Journal of the American Statistical Association* 87 (417): 108–19.

*Journal of the Royal Statistical Society: Series B (Statistical Methodology)* 65 (3): 743–58.

*The Annals of Statistics* 32 (2): 656–92.

*Proceedings of the 31st International Conference on Machine Learning (ICML-14)*, 1674–82.

*Proceedings of the AAAI Conference on Artificial Intelligence* 32 (1).

*Artificial Intelligence and Statistics*, 370–78. PMLR.

*Journal of the American Statistical Association* 0 (0): 1–22.
