# Observability and sensitivity in learning dynamical systems

Parameter identifiability in dynamical models

November 9, 2020

How precisely can I learn a given parameter of a dynamical system from observation? In ODE theory a useful concept is *sensitivity analysis*, which tells us how much gradient information our observations give us about a parameter. This comes in *local* (at my current estimate) and *global* (over the whole parameter range) flavours.
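A minimal sketch of local sensitivity analysis, using a toy logistic-growth model and central finite differences (the helper names `trajectory` and `local_sensitivity` are my own invention, not a standard API):

```python
import numpy as np
from scipy.integrate import solve_ivp

def trajectory(theta, t_eval, x0=0.1):
    """Integrate logistic growth dx/dt = r x (1 - x/K) with theta = (r, K)."""
    r, K = theta
    sol = solve_ivp(lambda t, x: r * x * (1 - x / K),
                    (t_eval[0], t_eval[-1]), [x0],
                    t_eval=t_eval, rtol=1e-10, atol=1e-10)
    return sol.y[0]

def local_sensitivity(theta, t_eval, i, h=1e-4):
    """Central-difference estimate of d x(t) / d theta_i along the trajectory."""
    up = np.array(theta, dtype=float)
    dn = up.copy()
    up[i] += h
    dn[i] -= h
    return (trajectory(up, t_eval) - trajectory(dn, t_eval)) / (2 * h)

t = np.linspace(0.0, 10.0, 50)
S_r = local_sensitivity((1.0, 1.0), t, i=0)  # sensitivity to growth rate r
```

The shape of `S_r` is already informative: it vanishes at the fixed initial condition, peaks during the growth phase, and decays again once the trajectory saturates at the carrying capacity, where observations tell us almost nothing about `r`.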

In linear systems theory the term *observability* is used to discuss whether we can in fact identify a parameter or a latent state; I will conflate the two for current purposes.
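For linear time-invariant systems this question has a crisp answer: the Kalman rank condition. A small sketch, checking the rank of the observability matrix for two choices of output map (the example system is arbitrary):

```python
import numpy as np

def observability_matrix(A, C):
    """Kalman observability matrix [C; CA; CA^2; ...; CA^(n-1)]."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Two decoupled decaying modes; which ones we can see depends on C.
A = np.diag([-1.0, -2.0])
C_both = np.array([[1.0, 1.0]])  # output mixes both states
C_one = np.array([[1.0, 0.0]])   # output ignores the second state

rank_full = np.linalg.matrix_rank(observability_matrix(A, C_both))    # 2: observable
rank_partial = np.linalg.matrix_rank(observability_matrix(A, C_one))  # 1: x_2 invisible
```

Full rank means the initial state (and, by extension, anything it determines) can be reconstructed from the output; a rank deficit pinpoints a subspace the observations can never distinguish.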

Sometimes learning a *parameter* as such is a red herring; we in fact wish to learn an object which is a function of the parameters, such as a transfer function, and many different parameter combinations will approximate that object similarly well. If we know what the actual object of interest *is*, we might hope to integrate out the nuisance parameters and measure sensitivity to this object itself; but maybe we do not even know that. Then what do we do?
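The classic instance of this is state-space realization theory: any similarity transform of a state-space model leaves the transfer function, and hence every input/output experiment, unchanged. A sketch with arbitrary example matrices, comparing Markov parameters (impulse-response coefficients) of two realizations:

```python
import numpy as np

# An arbitrary discrete-time state-space model (A, B, C).
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, -1.0]])

# Any invertible T gives an equivalent realization (T A T^-1, T B, C T^-1).
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])
Ti = np.linalg.inv(T)
A2, B2, C2 = T @ A @ Ti, T @ B, C @ Ti

def impulse_response(A, B, C, n=20):
    """Markov parameters h[k] = C A^k B — all that input/output data can reveal."""
    h, Ak = [], np.eye(A.shape[0])
    for _ in range(n):
        h.append((C @ Ak @ B).item())
        Ak = Ak @ A
    return np.array(h)

h1 = impulse_response(A, B, C)
h2 = impulse_response(A2, B2, C2)
# h1 == h2 (up to floating point), although the parameter matrices differ entrywise.
```

So the entries of `A`, `B`, `C` are not identifiable from data, while the transfer function they jointly determine is; sensitivity to the former can be zero in directions along which the latter is unchanged.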

## 1 Ergodicity

The point of contact between ergodic theorems and statistical identifiability: to come.
