# Recursive identification

Learning forward dynamics by looking at time series a bit at a time

September 15, 2017 — January 3, 2024

Bayes
dynamical systems
linear algebra
probability
signal processing
state space models
statistics
time series

A grab-bag of perspectives and tricks for recursive identification of dynamical systems, i.e. updating a model so that it produces correct forward predictions given the past.

I do a lot of work with this kind of problem and have many thoughts about it, in particular about the complexities of recursive system identification with hidden states. A great project would be to map out that space, even connecting it to causality. That may or may not eventually happen.

Keywords: multi-step prediction, time horizon, teacher forcing. The various things that are meant by “autoregressive”.

A common core of ideas pops up across forecasting, state-filtering system identification (including particle versions), RNNs, and forward operator learning. The Koopman operator could be described as yet another perspective on the same problem.

## 1 Classic systems learning

Landmark papers according to Lindström et al. (2012):

> Augmenting the unobserved state vector is a well known technique, used in the system identification community for decades, see e.g. Ljung. Similar ideas, using sequential Monte Carlo methods, have also been suggested. Combined state and parameter estimation is also the standard technique for data assimilation in high-dimensional systems, see Moradkhani et al.

> However, introducing random walk dynamics to the parameters with fixed variance leads to a new dynamical stochastic system with properties that may be different from the properties of the original system. That implies that the variance of the random walk should be decreased when the method is used for offline parameter estimation.
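The state-augmentation recipe above can be sketched concretely: run a bootstrap particle filter over the augmented state $$(x, \theta)$$, giving the parameter artificial random-walk dynamics whose variance decays over time. Here is a minimal sketch on a toy scalar AR(1) system; the system constants, the decay schedule, and the particle count are all illustrative assumptions, not anything prescribed by the papers above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system (an assumption for illustration):
# x_{t+1} = theta * x_t + w_t, observed as y_t = x_t + v_t.
theta_true, q, r, T = 0.8, 0.1, 0.1, 200
x, ys = 0.0, []
for _ in range(T):
    x = theta_true * x + q * rng.standard_normal()
    ys.append(x + r * rng.standard_normal())

# Bootstrap particle filter on the augmented state (x, theta).
# theta gets artificial random-walk dynamics with a decaying standard
# deviation, per the offline-estimation advice quoted above.
N = 2000
xs = np.zeros(N)
thetas = rng.uniform(0.0, 1.0, N)      # crude prior over the parameter
for t, y in enumerate(ys):
    sigma_t = 0.05 / np.sqrt(1 + t)    # decaying random-walk std (assumed schedule)
    thetas = thetas + sigma_t * rng.standard_normal(N)
    xs = thetas * xs + q * rng.standard_normal(N)
    logw = -0.5 * ((y - xs) / r) ** 2  # Gaussian observation log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)   # multinomial resampling
    xs, thetas = xs[idx], thetas[idx]

print(thetas.mean())  # posterior mean should end up near theta_true
```

Note the trade-off the quoted passage warns about: if `sigma_t` were held fixed, the filter would target a perturbed system; decaying it too fast instead freezes particle diversity in `theta`, which is why more refined schemes (e.g. Liu and West 2001) exist.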

## 2 The pushforward trick

When writing Takamoto et al. (2022) we learned a useful way of thinking about this problem from Brandstetter, Worrall, and Welling (2022), which solved many difficulties at once for us. They think about it as a distribution shift problem, but one where we can reduce the magnitude of the implied distribution shift, which they call the pushforward trick.

> We approach the problem in probabilistic terms. The solver maps $$p_k \mapsto \mathcal{A}_{\sharp} p_k$$ at iteration $$k+1$$, where $$\mathcal{A}_{\sharp}: \mathbb{P}(X) \rightarrow \mathbb{P}(X)$$ is the pushforward operator for $$\mathcal{A}$$ and $$\mathbb{P}(X)$$ is the space of distributions on $$X$$. After a single test time iteration, the solver sees samples from $$\mathcal{A}_{\sharp} p_k$$ instead of the distribution $$p_{k+1}$$, and unfortunately $$\mathcal{A}_{\sharp} p_k \neq p_{k+1}$$ because errors always survive training. The test time distribution is thus shifted, which we refer to as the distribution shift problem. This is a domain adaptation problem.
>
> We mitigate the distribution shift problem by adding a stability loss term, accounting for the distribution shift. A natural candidate is an adversarial-style loss $$L_{\text{stability}}=\mathbb{E}_k \mathbb{E}_{\mathbf{u}^{k+1} \mid \mathbf{u}^k,\, \mathbf{u}^k \sim p_k}\left[\mathbb{E}_{\boldsymbol{\epsilon} \mid \mathbf{u}^k}\left[\mathcal{L}\left(\mathcal{A}\left(\mathbf{u}^k+\boldsymbol{\epsilon}\right), \mathbf{u}^{k+1}\right)\right]\right]$$ where $$\boldsymbol{\epsilon} \mid \mathbf{u}^k$$ is an adversarial perturbation sampled from an appropriate distribution. For the perturbation distribution, we choose $$\boldsymbol{\epsilon}$$ such that $$\left(\mathbf{u}^k+\boldsymbol{\epsilon}\right) \sim \mathcal{A}_{\sharp} p_k$$. This can be easily achieved by using $$\left(\mathbf{u}^k+\boldsymbol{\epsilon}\right)=\mathcal{A}\left(\mathbf{u}^{k-1}\right)$$ for $$\mathbf{u}^{k-1}$$ one step causally preceding $$\mathbf{u}^k$$. Our total loss is then $$L_{\text{one-step}}+L_{\text{stability}}$$. We call this the pushforward trick. We implement this by unrolling the solver for 2 steps but only backpropagating errors on the last unroll step, as shown in Figure. … This is not only faster, it also seems to be more stable. Exactly why, we are not sure, but we think it may be to ensure the perturbations are large enough. Training the adversarial distribution itself to minimize the error defeats the purpose of using it as an adversarial distribution.
>
> Adversarial losses were also introduced in Sanchez-Gonzalez et al. (2020) and later used in Mayr et al. (2023), where Brownian motion noise is used for $$\boldsymbol{\epsilon}$$, and there is some similarity to Noisy Nodes, where noise injection is found to stabilize training of deep graph neural networks. There are also connections with zero-stability (Hairer et al., 1993) from the ODE solver literature. Zero-stability is the condition that perturbations in the input conditions are damped out sublinearly in time, that is $$\left\|\mathcal{A}\left(\mathbf{u}^0+\boldsymbol{\epsilon}\right)-\mathbf{u}^1\right\|<\kappa\|\boldsymbol{\epsilon}\|$$, for an appropriate norm and small $$\kappa$$. The pushforward trick can be seen to minimize $$\kappa$$ directly.

That is an interesting justification for a very simple trick: we train better by using a two-steps-forward, one-step-back approach, where the forward input is a pushforward of the previous step.
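The mechanics of the trick, unroll two steps but take gradients only through the last, can be sketched without any deep learning machinery. Below is a minimal sketch on a scalar linear system with hand-written gradients; the system, the initial guess, and the learning rate are all assumptions for illustration, and the "no gradient through the first step" rule is what a `detach`/`stop_gradient` call would do in an autodiff framework.

```python
import numpy as np

# Toy "solver" target: u_{k+1} = a_true * u_k (an assumed scalar linear system).
a_true = 0.9
u = [1.0]
for _ in range(50):
    u.append(a_true * u[-1])
u = np.array(u)

# Learn a_hat by the pushforward trick: the first unroll step supplies a
# perturbed input drawn from A_# p_k, but gradients flow only through the
# second step (the first prediction is treated as a constant).
a_hat, lr = 0.5, 0.1
for epoch in range(200):
    for k in range(len(u) - 2):
        u_pert = a_hat * u[k]      # one-step pushforward; NO gradient through this
        pred = a_hat * u_pert      # second step; gradient flows only through this
        err = pred - u[k + 2]
        grad = 2 * err * u_pert    # d(err^2)/d(a_hat), holding u_pert fixed
        a_hat -= lr * grad

print(a_hat)  # should converge toward a_true = 0.9
```

Note that the training input `u_pert` is exactly a sample from $$\mathcal{A}_{\sharp} p_k$$, so the model is trained on the same shifted distribution it will see at test time, which is the whole point of the trick.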

## 3 Backpropagation through time

Here we discuss learning parameters in classic recurrent neural networks.

We can think of the problem of learning recurrent networks as essentially a system identification problem with all the implied difficulties including stability problems.

RNN research has its own special terminology, e.g. vanishing/exploding gradients, and TBPTT (truncated backpropagation through time), which makes explicit the horizon over which gradients are taken.
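TBPTT can be sketched without an autodiff framework on a scalar linear RNN: carry the hidden state across windows, but reset the gradient recursion at each window boundary so gradients only see the last $$K$$ steps. Everything below (the teacher system, window length, learning rate) is an assumed toy setup; the gradient accumulation is forward-mode within each window, which coincides with BPTT for this tiny model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Teacher RNN (assumed toy system): h_t = a*h_{t-1} + b*x_t, y_t = h_t.
a_true, b_true, T = 0.5, 1.0, 4000
xs = rng.standard_normal(T)
h, ys = 0.0, np.empty(T)
for t in range(T):
    h = a_true * h + b_true * xs[t]
    ys[t] = h

# TBPTT with window K: the hidden state h persists across windows, but the
# sensitivities dh/da, dh/db are zeroed at each boundary -- that is the
# truncation, limiting how far back in time gradients reach.
a, b, lr, K = 0.0, 0.0, 0.01, 5
h = 0.0
for start in range(0, T, K):
    dh_da = dh_db = 0.0            # truncate: no gradient through older states
    ga = gb = 0.0
    for t in range(start, min(start + K, T)):
        dh_da = h + a * dh_da      # sensitivity recursions (use h_{t-1})
        dh_db = xs[t] + a * dh_db
        h = a * h + b * xs[t]
        err = h - ys[t]
        ga += 2 * err * dh_da      # accumulate window gradient for a
        gb += 2 * err * dh_db      # ... and for b
    a -= lr * ga
    b -= lr * gb

print(a, b)  # should approach (a_true, b_true)
```

The truncation introduces a bias (dependencies older than $$K$$ steps are invisible to the gradient) in exchange for bounded memory and better-conditioned updates, which is exactly the vanishing/exploding-gradient trade-off in miniature.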

## 4 References

Aicher, Foti, and Fox. 2020. In Proceedings of The 35th Uncertainty in Artificial Intelligence Conference.
Andrieu, Doucet, and Holenstein. 2010. Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Archer, Park, Buesing, et al. 2015. arXiv:1511.07367 [Stat].
Babtie, Kirk, and Stumpf. 2014. Proceedings of the National Academy of Sciences.
Bamler, and Mandt. 2017. arXiv:1707.01069 [Cs, Stat].
Becker, Pandya, Gebhardt, et al. 2019. In International Conference on Machine Learning.
Bengio, Simard, and Frasconi. 1994. IEEE Transactions on Neural Networks.
Box, Jenkins, Reinsel, et al. 2016. Time Series Analysis: Forecasting and Control. Wiley Series in Probability and Statistics.
Brandstetter, Worrall, and Welling. 2022. In International Conference on Learning Representations.
Bretó, He, Ionides, et al. 2009. The Annals of Applied Statistics.
Brunton, Proctor, and Kutz. 2016. Proceedings of the National Academy of Sciences.
Cao, Li, Petzold, et al. 2003. SIAM Journal on Scientific Computing.
Chevillon. 2007. Journal of Economic Surveys.
Chung, Kastner, Dinh, et al. 2015. In Advances in Neural Information Processing Systems 28.
Corenflos, Thornton, Deligiannidis, et al. 2021. arXiv:2102.07850 [Cs, Stat].
Del Moral, Doucet, and Jasra. 2006. Journal of the Royal Statistical Society: Series B (Statistical Methodology).
———. 2011. Statistics and Computing.
Doucet, Freitas, and Gordon. 2001. Sequential Monte Carlo Methods in Practice.
Doucet, Jacob, and Rubenthaler. 2013. arXiv:1304.5768 [Stat].
Drovandi, Pettitt, and McCutchan. 2016. Bayesian Analysis.
Durbin, and Koopman. 2012. Time Series Analysis by State Space Methods. Oxford Statistical Science Series 38.
Errico. 1997. Bulletin of the American Meteorological Society.
Evensen. 2003. Ocean Dynamics.
———. 2009a. Data Assimilation - The Ensemble Kalman Filter.
———. 2009b. IEEE Control Systems.
Fearnhead, and Künsch. 2018. Annual Review of Statistics and Its Application.
Gahungu, Lanyon, Álvarez, et al. 2022. In.
Godwin, Schaarschmidt, Gaunt, et al. 2022.
Heinonen, and d’Alché-Buc. 2014. arXiv:1411.5172 [Cs, Stat].
He, Ionides, and King. 2010. Journal of The Royal Society Interface.
Hurvich. 2002. International Journal of Forecasting, Forecasting Long Memory Processes.
Hürzeler, and Künsch. 2001. In Sequential Monte Carlo Methods in Practice. Statistics for Engineering and Information Science.
Ingraham, and Marks. 2017. In PMLR.
Innes. 2018. arXiv:1810.07951 [Cs].
Ionides, Edward L., Bhadra, Atchadé, et al. 2011. The Annals of Statistics.
Ionides, E. L., Bretó, and King. 2006. Proceedings of the National Academy of Sciences.
Ionides, Edward L., Nguyen, Atchadé, et al. 2015. Proceedings of the National Academy of Sciences.
Johnson. 2012.
Kantas, N., Doucet, Singh, et al. 2009. IFAC Proceedings Volumes, 15th IFAC Symposium on System Identification.
Kantas, Nikolas, Doucet, Singh, et al. 2015. Statistical Science.
Kidger, Chen, and Lyons. 2021. In Proceedings of the 38th International Conference on Machine Learning.
Kidger, Morrill, Foster, et al. 2020. arXiv:2005.08926 [Cs, Stat].
Kitagawa. 1998. Journal of the American Statistical Association.
Krishnan, Shalit, and Sontag. 2015. arXiv Preprint arXiv:1511.05121.
———. 2017. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence.
Lamb, Goyal, Zhang, et al. 2016. In Advances In Neural Information Processing Systems.
Laroche. 2007. Journal of the Audio Engineering Society.
Legenstein, Naeger, and Maass. 2005. Neural Computation.
Le, Igl, Jin, et al. 2017. arXiv Preprint arXiv:1705.10306.
Lele, S. R., Dennis, and Lutscher. 2007. Ecology Letters.
Lele, Subhash R., Nadeem, and Schmuland. 2010. Journal of the American Statistical Association.
Lillicrap, and Santoro. 2019. Current Opinion in Neurobiology, Machine Learning, Big Data, and Neuroscience.
Lindström, Ionides, Frydendall, et al. 2012. In IFAC-PapersOnLine (System Identification, Volume 16). 16th IFAC Symposium on System Identification.
Lindström, Ströjby, Brodén, et al. 2008. Computational Statistics & Data Analysis.
Liu, and West. 2001. In Sequential Monte Carlo Methods in Practice. Statistics for Engineering and Information Science.
Li, Wong, Chen, et al. 2020. In International Conference on Artificial Intelligence and Statistics.
Ljung, L. 1979. IEEE Transactions on Automatic Control.
Ljung, Lennart, Pflug, and Walk. 2012. Stochastic Approximation and Optimization of Random Systems.
Ljung, Lennart, and Söderström. 1983. Theory and Practice of Recursive Identification. The MIT Press Series in Signal Processing, Optimization, and Control 4.
Maddison, Lawson, Tucker, et al. 2017. arXiv Preprint arXiv:1705.09279.
Margossian, Vehtari, Simpson, et al. 2020. arXiv:2004.12550 [Stat].
Mayr, Lehner, Mayrhofer, et al. 2023.
Mitusch, Funke, and Dokken. 2019. Journal of Open Source Software.
Naesseth, Linderman, Ranganath, et al. 2017. arXiv Preprint arXiv:1705.11140.
Oliva, Poczos, and Schneider. 2017. arXiv:1703.00381 [Cs, Stat].
Pascanu, Mikolov, and Bengio. 2013. In arXiv:1211.5063 [Cs].
Rackauckas, Ma, Dixit, et al. 2018. arXiv:1812.01892 [Cs].
Sanchez-Gonzalez, Godwin, Pfaff, et al. 2020. In Proceedings of the 37th International Conference on Machine Learning.
Simchowitz, Boczar, and Recht. 2019. arXiv:1902.00768 [Cs, Math, Stat].
Sjöberg, Zhang, Ljung, et al. 1995. Automatica, Trends in System Identification.
Söderström, and Stoica, eds. 1988. System Identification.
Stapor, Fröhlich, and Hasenauer. 2018. bioRxiv.
Sutskever. 2013.
Takamoto, Praditia, Leiteritz, et al. 2022. In.
Tallec, and Ollivier. 2017.
Tippett, Anderson, Bishop, et al. 2003. Monthly Weather Review.
Uziel. 2020. In International Conference on Artificial Intelligence and Statistics.
Wen, Torkkola, and Narayanaswamy. 2017. arXiv:1711.11053 [Stat].
Werbos. 1988. Neural Networks.
———. 1990. Proceedings of the IEEE.
Williams, and Peng. 1990. Neural Computation.
Williams, and Zipser. 1989. Neural Computation.