Recursive identification

Learning forward dynamics by looking at time series a bit at a time

September 15, 2017 — January 3, 2024

Bayes
dynamical systems
linear algebra
probability
signal processing
state space models
statistics
time series

A grab-bag of perspectives and tricks for recursive identification of dynamical systems, i.e. updating a model so that it produces correct forward predictions given the past.

I do a lot of work on this kind of problem and have many thoughts about it, in particular about the complexities of recursive system identification with hidden states. A great project would be to map out the space, even connecting it to causality; that may or may not eventually happen.


Keywords: multi-step prediction, time horizon, teacher forcing, and the various things that are meant by “autoregressive”.

A common core of ideas pops up here in forecasting, state-filtering system identification (including particle versions), RNNs, and forward operator learning. The Koopman operator could be described as an alternative perspective.
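To pin down the terminology before going further, here is a minimal sketch contrasting teacher forcing, where each one-step prediction is conditioned on the true history, with free-running autoregressive rollout, where the model is fed its own predictions, as happens at test time. All names (`model`, `loss_fn`, tensor layout) are hypothetical, not from any particular library.

```python
# Minimal sketch, assuming `model` is a one-step forward map applied
# snapshot-wise to tensors of shape (time, batch, dim).
import torch

def teacher_forced_loss(model, loss_fn, u):
    """One-step loss: predict u[t+1] from the *true* u[t] (teacher forcing)."""
    return loss_fn(model(u[:-1]), u[1:])

def rollout_loss(model, loss_fn, u, horizon):
    """Multi-step loss: feed the model its own predictions (autoregressive)."""
    state, loss = u[0], 0.0
    for t in range(1, horizon + 1):
        state = model(state)                # errors compound as t grows
        loss = loss + loss_fn(state, u[t])
    return loss / horizon
```

The gap between these two losses is exactly the distribution shift problem discussed in the pushforward section below.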

1 Classic systems learning

Landmark papers according to Lindström et al. (2012):

Augmenting the unobserved state vector is a well known technique, used in the system identification community for decades, see e.g. Ljung (L. Ljung 1979; Lindström et al. 2008; Söderström and Stoica 1988). Similar ideas, using Sequential Monte Carlo methods, were suggested by (Kitagawa 1998; Liu and West 2001). Combined state and parameter estimation is also the standard technique for data assimilation in high-dimensional systems, see Moradkhani et al. (Evensen 2009a, 2009b; Moradkhani et al. 2005).

However, introducing random walk dynamics to the parameters with fixed variance leads to a new dynamical stochastic system with properties that may be different from the properties of the original system. That implies that the variance of the random walk should be decreased when the method is used for offline parameter estimation, cf. (Hürzeler and Künsch 2001).
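Here is a minimal sketch of that state-augmentation recipe in its particle-filter flavour: the unknown parameter is stacked into the particle state and given artificial random-walk dynamics whose variance is annealed toward zero, per the caveat above. The toy model and all names are made up for illustration.

```python
# Joint state/parameter estimation by state augmentation, sketched with a
# bootstrap particle filter on a toy scalar system (hypothetical example).
import numpy as np

rng = np.random.default_rng(0)

def transition(x, theta):
    # Toy nonlinear dynamics; theta is the unknown parameter to identify.
    return theta * x + 0.1 * np.sin(x)

N, T, obs_noise = 1000, 200, 0.5

# Simulate data from the "true" system (theta = 0.9).
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = transition(x_true[t - 1], 0.9) + 0.1 * rng.standard_normal()
y = x_true + obs_noise * rng.standard_normal(T)

# Augmented particles: (state, parameter).
x = rng.standard_normal(N)
theta = rng.uniform(0.0, 2.0, N)

for t in range(T):
    # Artificial random-walk dynamics on theta, with variance decaying in t
    # so the parameter estimate can settle (offline-style annealing).
    sigma_t = 0.1 / np.sqrt(t + 1)
    theta = theta + sigma_t * rng.standard_normal(N)
    x = transition(x, theta) + 0.1 * rng.standard_normal(N)

    # Reweight by the observation likelihood and resample jointly, so
    # parameter particles are selected by their predictive fit.
    logw = -0.5 * ((y[t] - x) / obs_noise) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)
    x, theta = x[idx], theta[idx]

print(f"posterior mean theta ≈ {theta.mean():.3f} (true 0.9)")
```

The annealing schedule \(\sigma_t \propto 1/\sqrt{t}\) is one arbitrary choice; the quoted passage only requires that the artificial variance shrink when the method is used offline.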

2 The pushforward trick


When writing Takamoto et al. (2022) we learned a useful way of thinking about this problem from Brandstetter, Worrall, and Welling (2022), which solved many difficulties at once for us. They frame it as a distribution shift problem, but one where we can reduce the magnitude of the implied distribution shift, via what they call the pushforward trick.

We approach the problem in probabilistic terms. The solver maps \(p_k \mapsto \mathcal{A}_{\sharp} p_k\) at iteration \(k+1\), where \(\mathcal{A}_{\sharp}: \mathbb{P}(X) \rightarrow \mathbb{P}(X)\) is the pushforward operator for \(\mathcal{A}\) and \(\mathbb{P}(X)\) is the space of distributions on \(X\). After a single test-time iteration, the solver sees samples from \(\mathcal{A}_{\sharp} p_k\) instead of the distribution \(p_{k+1}\), and unfortunately \(\mathcal{A}_{\sharp} p_k \neq p_{k+1}\) because errors always survive training. The test-time distribution is thus shifted, which we refer to as the distribution shift problem. This is a domain adaptation problem.

We mitigate the distribution shift problem by adding a stability loss term, accounting for the distribution shift. A natural candidate is an adversarial-style loss
\[
L_{\text{stability}} = \mathbb{E}_k\, \mathbb{E}_{\mathbf{u}^{k+1} \mid \mathbf{u}^k,\, \mathbf{u}^k \sim p_k}\left[\mathbb{E}_{\boldsymbol{\epsilon} \mid \mathbf{u}^k}\left[\mathcal{L}\left(\mathcal{A}\left(\mathbf{u}^k+\boldsymbol{\epsilon}\right), \mathbf{u}^{k+1}\right)\right]\right]
\]
where \(\boldsymbol{\epsilon} \mid \mathbf{u}^k\) is an adversarial perturbation sampled from an appropriate distribution. For the perturbation distribution, we choose \(\boldsymbol{\epsilon}\) such that \(\left(\mathbf{u}^k+\boldsymbol{\epsilon}\right) \sim \mathcal{A}_{\sharp} p_k\). This is easily achieved by using \(\left(\mathbf{u}^k+\boldsymbol{\epsilon}\right)=\mathcal{A}\left(\mathbf{u}^{k-1}\right)\) for \(\mathbf{u}^{k-1}\) one step causally preceding \(\mathbf{u}^k\). Our total loss is then \(L_{\text{one-step}}+L_{\text{stability}}\). We call this the pushforward trick. We implement it by unrolling the solver for 2 steps but only backpropagating errors on the last unroll step, as shown in Figure …. This is not only faster; it also seems to be more stable. Exactly why, we are not sure, but we think it may be to ensure the perturbations are large enough: training the adversarial distribution itself to minimize the error defeats the purpose of using it as an adversarial distribution.

Adversarial losses were also introduced in Sanchez-Gonzalez et al. (2020) and later used in Mayr et al. (2023), where Brownian motion noise is used for \(\boldsymbol{\epsilon}\); there is some similarity to Noisy Nodes (Godwin et al. 2022), where noise injection is found to stabilize training of deep graph neural networks. There are also connections with zero-stability (Hairer et al. 1993) from the ODE solver literature. Zero-stability is the condition that perturbations in the input conditions are damped out sublinearly in time, that is, \(\left\|\mathcal{A}\left(\mathbf{u}^0+\boldsymbol{\epsilon}\right)-\mathbf{u}^1\right\|<\kappa\|\boldsymbol{\epsilon}\|\) for an appropriate norm and small \(\kappa\). The pushforward trick can be seen to minimize \(\kappa\) directly.

That is an interesting justification for a very simple trick: we train better by using a two-steps-forward, one-step-back approach, where the first forward step is a pushforward of the previous step and gradients flow only through the second.

Figure 3: Brandstetter, Worrall, and Welling (2022)’s answer to the distribution shift problem.
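In code, the trick really is only a couple of lines. A minimal sketch, assuming a PyTorch-style setting with hypothetical `solver` and `loss_fn`, and three consecutive snapshots of a trajectory:

```python
# The pushforward trick: unroll two steps, but block gradients through the
# first so its output acts as a perturbed input distributed as A_# p_k.
import torch

def pushforward_step(solver, loss_fn, u_prev, u_k, u_next):
    """One training step; u_prev -> u_k -> u_next are consecutive snapshots."""
    # One-step loss: ordinary teacher-forced prediction from the true u_k.
    one_step = loss_fn(solver(u_k), u_next)

    # Stability loss: feed the solver its *own* prediction of u_k, i.e. a
    # sample from the pushforward distribution A_# p_k. No gradient flows
    # through this first unroll step.
    with torch.no_grad():
        u_k_pred = solver(u_prev)        # (u^k + eps) = A(u^{k-1})
    stability = loss_fn(solver(u_k_pred), u_next)

    return one_step + stability
```

The `torch.no_grad()` block is what implements “only backpropagating errors on the last unroll step”: the first application of the solver merely manufactures a sample from \(\mathcal{A}_{\sharp} p_k\), and is never itself optimized.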

3 Backpropagation through time

This is how we usually discuss learning parameters in classic recurrent neural networks (Werbos 1988, 1990).

We can think of the problem of learning recurrent networks as essentially a system identification problem, with all the implied difficulties, including stability problems.

RNN research has its own special terminology, e.g. vanishing/exploding gradients (Bengio, Simard, and Frasconi 1994; Pascanu, Mikolov, and Bengio 2013), and TBPTT, truncated backpropagation through time (Williams and Zipser 1989; Williams and Peng 1990), which makes explicit the horizon over which gradients are taken.
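A minimal sketch of TBPTT, assuming hypothetical `rnn`, `loss_fn`, and `optimizer`, and a hidden state that is a single tensor (as for `nn.RNN`/`nn.GRU`; an LSTM's tuple state would need both halves detached): the gradient graph is cut every `k` steps by detaching the hidden state, which bounds memory and gradient-path length at the cost of biasing gradients toward short-range dependencies.

```python
# Truncated backpropagation through time over a series of length T.
import torch

def tbptt(rnn, loss_fn, optimizer, xs, ys, k=20):
    """xs, ys: (T, batch, dim) input/target series; k: truncation window."""
    h = None
    T = xs.shape[0]
    for start in range(0, T, k):
        if h is not None:
            h = h.detach()            # truncate: no gradient past this point
        out, h = rnn(xs[start:start + k], h)
        loss = loss_fn(out, ys[start:start + k])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```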

4 Method of adjoints

See method of adjoints.

5 References

Aicher, Foti, and Fox. 2020. “Adaptively Truncating Backpropagation Through Time to Control Gradient Bias.” In Proceedings of The 35th Uncertainty in Artificial Intelligence Conference.
Andrieu, Doucet, and Holenstein. 2010. “Particle Markov Chain Monte Carlo Methods.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Archer, Park, Buesing, et al. 2015. “Black Box Variational Inference for State Space Models.” arXiv:1511.07367 [Stat].
Babtie, Kirk, and Stumpf. 2014. “Topological Sensitivity Analysis for Systems Biology.” Proceedings of the National Academy of Sciences.
Bamler, and Mandt. 2017. “Structured Black Box Variational Inference for Latent Time Series Models.” arXiv:1707.01069 [Cs, Stat].
Becker, Pandya, Gebhardt, et al. 2019. “Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces.” In International Conference on Machine Learning.
Bengio, Simard, and Frasconi. 1994. “Learning Long-Term Dependencies with Gradient Descent Is Difficult.” IEEE Transactions on Neural Networks.
Box, Jenkins, Reinsel, et al. 2016. Time Series Analysis: Forecasting and Control. Wiley Series in Probability and Statistics.
Brandstetter, Worrall, and Welling. 2022. “Message Passing Neural PDE Solvers.” In International Conference on Learning Representations.
Bretó, He, Ionides, et al. 2009. “Time Series Analysis via Mechanistic Models.” The Annals of Applied Statistics.
Brunton, Proctor, and Kutz. 2016. “Discovering Governing Equations from Data by Sparse Identification of Nonlinear Dynamical Systems.” Proceedings of the National Academy of Sciences.
Cao, Li, Petzold, et al. 2003. “Adjoint Sensitivity Analysis for Differential-Algebraic Equations: The Adjoint DAE System and Its Numerical Solution.” SIAM Journal on Scientific Computing.
Chevillon. 2007. “Direct Multi-Step Estimation and Forecasting.” Journal of Economic Surveys.
Chung, Kastner, Dinh, et al. 2015. “A Recurrent Latent Variable Model for Sequential Data.” In Advances in Neural Information Processing Systems 28.
Corenflos, Thornton, Deligiannidis, et al. 2021. “Differentiable Particle Filtering via Entropy-Regularized Optimal Transport.” arXiv:2102.07850 [Cs, Stat].
Del Moral, Doucet, and Jasra. 2006. “Sequential Monte Carlo Samplers.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).
———. 2011. “An Adaptive Sequential Monte Carlo Method for Approximate Bayesian Computation.” Statistics and Computing.
Doucet, Freitas, and Gordon. 2001. Sequential Monte Carlo Methods in Practice.
Doucet, Jacob, and Rubenthaler. 2013. “Derivative-Free Estimation of the Score Vector and Observed Information Matrix with Application to State-Space Models.” arXiv:1304.5768 [Stat].
Drovandi, Pettitt, and McCutchan. 2016. “Exact and Approximate Bayesian Inference for Low Integer-Valued Time Series Models with Intractable Likelihoods.” Bayesian Analysis.
Durbin, and Koopman. 2012. Time Series Analysis by State Space Methods. Oxford Statistical Science Series 38.
Errico. 1997. “What Is an Adjoint Model?” Bulletin of the American Meteorological Society.
Evensen. 2003. “The Ensemble Kalman Filter: Theoretical Formulation and Practical Implementation.” Ocean Dynamics.
———. 2009a. Data Assimilation - The Ensemble Kalman Filter.
———. 2009b. “The Ensemble Kalman Filter for Combined State and Parameter Estimation.” IEEE Control Systems.
Fearnhead, and Künsch. 2018. “Particle Filters and Data Assimilation.” Annual Review of Statistics and Its Application.
Gahungu, Lanyon, Álvarez, et al. 2022. “Adjoint-Aided Inference of Gaussian Process Driven Differential Equations.” In.
Godwin, Schaarschmidt, Gaunt, et al. 2022. “Simple GNN Regularisation for 3D Molecular Property Prediction & Beyond.”
Heinonen, and d’Alché-Buc. 2014. “Learning Nonparametric Differential Equations with Operator-Valued Kernels and Gradient Matching.” arXiv:1411.5172 [Cs, Stat].
He, Ionides, and King. 2010. “Plug-and-Play Inference for Disease Dynamics: Measles in Large and Small Populations as a Case Study.” Journal of The Royal Society Interface.
Hurvich. 2002. “Multistep Forecasting of Long Memory Series Using Fractional Exponential Models.” International Journal of Forecasting, Forecasting Long Memory Processes.
Hürzeler, and Künsch. 2001. “Approximating and Maximising the Likelihood for a General State-Space Model.” In Sequential Monte Carlo Methods in Practice. Statistics for Engineering and Information Science.
Ingraham, and Marks. 2017. “Variational Inference for Sparse and Undirected Models.” In PMLR.
Innes. 2018. “Don’t Unroll Adjoint: Differentiating SSA-Form Programs.” arXiv:1810.07951 [Cs].
Ionides, Edward L., Bhadra, Atchadé, et al. 2011. “Iterated Filtering.” The Annals of Statistics.
Ionides, E. L., Bretó, and King. 2006. “Inference for Nonlinear Dynamical Systems.” Proceedings of the National Academy of Sciences.
Ionides, Edward L., Nguyen, Atchadé, et al. 2015. “Inference for Dynamic and Latent Variable Models via Iterated, Perturbed Bayes Maps.” Proceedings of the National Academy of Sciences.
Johnson. 2012. “Notes on Adjoint Methods for 18.335.”
Kantas, N., Doucet, Singh, et al. 2009. “An Overview of Sequential Monte Carlo Methods for Parameter Estimation in General State-Space Models.” IFAC Proceedings Volumes, 15th IFAC Symposium on System Identification.
Kantas, Nikolas, Doucet, Singh, et al. 2015. “On Particle Methods for Parameter Estimation in State-Space Models.” Statistical Science.
Kidger, Chen, and Lyons. 2021. “‘Hey, That’s Not an ODE’: Faster ODE Adjoints via Seminorms.” In Proceedings of the 38th International Conference on Machine Learning.
Kidger, Morrill, Foster, et al. 2020. “Neural Controlled Differential Equations for Irregular Time Series.” arXiv:2005.08926 [Cs, Stat].
Kitagawa. 1998. “A Self-Organizing State-Space Model.” Journal of the American Statistical Association.
Krishnan, Shalit, and Sontag. 2015. “Deep Kalman Filters.” arXiv Preprint arXiv:1511.05121.
———. 2017. “Structured Inference Networks for Nonlinear State Space Models.” In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence.
Lamb, Goyal, Zhang, et al. 2016. “Professor Forcing: A New Algorithm for Training Recurrent Networks.” In Advances In Neural Information Processing Systems.
Laroche. 2007. “On the Stability of Time-Varying Recursive Filters.” Journal of the Audio Engineering Society.
Legenstein, Naeger, and Maass. 2005. “What Can a Neuron Learn with Spike-Timing-Dependent Plasticity?” Neural Computation.
Le, Igl, Jin, et al. 2017. “Auto-Encoding Sequential Monte Carlo.” arXiv Preprint arXiv:1705.10306.
Lele, S. R., Dennis, and Lutscher. 2007. “Data Cloning: Easy Maximum Likelihood Estimation for Complex Ecological Models Using Bayesian Markov Chain Monte Carlo Methods.” Ecology Letters.
Lele, Subhash R., Nadeem, and Schmuland. 2010. “Estimability and Likelihood Inference for Generalized Linear Mixed Models Using Data Cloning.” Journal of the American Statistical Association.
Lillicrap, and Santoro. 2019. “Backpropagation Through Time and the Brain.” Current Opinion in Neurobiology, Machine Learning, Big Data, and Neuroscience.
Lindström, Ionides, Frydendall, et al. 2012. “Efficient Iterated Filtering.” In IFAC-PapersOnLine (System Identification, Volume 16). 16th IFAC Symposium on System Identification.
Lindström, Ströjby, Brodén, et al. 2008. “Sequential Calibration of Options.” Computational Statistics & Data Analysis.
Liu, and West. 2001. “Combined Parameter and State Estimation in Simulation-Based Filtering.” In Sequential Monte Carlo Methods in Practice. Statistics for Engineering and Information Science.
Li, Wong, Chen, et al. 2020. “Scalable Gradients for Stochastic Differential Equations.” In International Conference on Artificial Intelligence and Statistics.
Ljung, L. 1979. “Asymptotic Behavior of the Extended Kalman Filter as a Parameter Estimator for Linear Systems.” IEEE Transactions on Automatic Control.
Ljung, Lennart, Pflug, and Walk. 2012. Stochastic Approximation and Optimization of Random Systems.
Ljung, Lennart, and Söderström. 1983. Theory and Practice of Recursive Identification. The MIT Press Series in Signal Processing, Optimization, and Control 4.
Maddison, Lawson, Tucker, et al. 2017. “Filtering Variational Objectives.” arXiv Preprint arXiv:1705.09279.
Margossian, Vehtari, Simpson, et al. 2020. “Hamiltonian Monte Carlo Using an Adjoint-Differentiated Laplace Approximation: Bayesian Inference for Latent Gaussian Models and Beyond.” arXiv:2004.12550 [Stat].
Mayr, Lehner, Mayrhofer, et al. 2023. “Boundary Graph Neural Networks for 3D Simulations.”
Mitusch, Funke, and Dokken. 2019. “Dolfin-Adjoint 2018.1: Automated Adjoints for FEniCS and Firedrake.” Journal of Open Source Software.
Moradkhani, Sorooshian, Gupta, et al. 2005. “Dual State–Parameter Estimation of Hydrological Models Using Ensemble Kalman Filter.” Advances in Water Resources.
Naesseth, Linderman, Ranganath, et al. 2017. “Variational Sequential Monte Carlo.” arXiv Preprint arXiv:1705.11140.
Oliva, Poczos, and Schneider. 2017. “The Statistical Recurrent Unit.” arXiv:1703.00381 [Cs, Stat].
Pascanu, Mikolov, and Bengio. 2013. “On the Difficulty of Training Recurrent Neural Networks.” arXiv:1211.5063 [Cs].
Rackauckas, Ma, Dixit, et al. 2018. “A Comparison of Automatic Differentiation and Continuous Sensitivity Analysis for Derivatives of Differential Equation Solutions.” arXiv:1812.01892 [Cs].
Sanchez-Gonzalez, Godwin, Pfaff, et al. 2020. “Learning to Simulate Complex Physics with Graph Networks.” In Proceedings of the 37th International Conference on Machine Learning.
Simchowitz, Boczar, and Recht. 2019. “Learning Linear Dynamical Systems with Semi-Parametric Least Squares.” arXiv:1902.00768 [Cs, Math, Stat].
Sjöberg, Zhang, Ljung, et al. 1995. “Nonlinear Black-Box Modeling in System Identification: A Unified Overview.” Automatica, Trends in System Identification.
Söderström, and Stoica, eds. 1988. System Identification.
Stapor, Fröhlich, and Hasenauer. 2018. “Optimization and Uncertainty Analysis of ODE Models Using 2nd Order Adjoint Sensitivity Analysis.” bioRxiv.
Sutskever. 2013. “Training Recurrent Neural Networks.”
Takamoto, Praditia, Leiteritz, et al. 2022. “PDEBench: An Extensive Benchmark for Scientific Machine Learning.” In.
Tallec, and Ollivier. 2017. “Unbiasing Truncated Backpropagation Through Time.”
Tippett, Anderson, Bishop, et al. 2003. “Ensemble Square Root Filters.” Monthly Weather Review.
Uziel. 2020. “Nonparametric Sequential Prediction While Deep Learning the Kernel.” In International Conference on Artificial Intelligence and Statistics.
Wen, Torkkola, and Narayanaswamy. 2017. “A Multi-Horizon Quantile Recurrent Forecaster.” arXiv:1711.11053 [Stat].
Werbos. 1988. “Generalization of Backpropagation with Application to a Recurrent Gas Market Model.” Neural Networks.
———. 1990. “Backpropagation Through Time: What It Does and How to Do It.” Proceedings of the IEEE.
Williams, and Peng. 1990. “An Efficient Gradient-Based Algorithm for On-Line Training of Recurrent Network Trajectories.” Neural Computation.
Williams, and Zipser. 1989. “A Learning Algorithm for Continually Running Fully Recurrent Neural Networks.” Neural Computation.