The intersection of linear dynamical systems and the stability of dynamical systems.

Related: detecting non-stationarity.

There is not much content here because I spent 2 years working on it and am too traumatised to revisit it.

Informally, I am admitting as "stable" any dynamical system which does not explode super-polynomially fast; we can think of these as systems where, even if the system is not stationary, at least its rate of change might be.

Energy-preserving systems are a special case of this.

There are many problems I am interested in that touch upon this.

## Pole representations

In univariate discrete-time linear systems terms, these are systems with no poles outside the unit circle, though they may have poles *on* the unit circle.
In continuous time, the analogous condition is that no poles have positive real part.
For finitely realizable systems this boils down to tracking trigonometric roots, e.g. Megretski (2003).
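Concretely, in this loose sense a discrete-time filter is admitted whenever none of its poles escapes the unit circle. A minimal numpy sketch (the function name is my own invention):

```python
import numpy as np

def is_marginally_stable(a_coeffs, tol=1e-9):
    """Check the poles of a discrete-time AR filter 1 / A(z).

    a_coeffs are the denominator coefficients [1, a_1, ..., a_p];
    the poles are the roots of A(z). "Stable" in the loose sense used
    here means no pole strictly outside the unit circle, so poles *on*
    the circle (unit roots) are still admitted.
    """
    poles = np.roots(a_coeffs)
    return bool(np.all(np.abs(poles) <= 1 + tol))

# AR(1) with coefficient 0.9: pole at 0.9, inside the unit circle.
print(is_marginally_stable([1.0, -0.9]))   # True
# Random walk: pole exactly on the unit circle, still admitted.
print(is_marginally_stable([1.0, -1.0]))   # True
# Explosive AR(1): pole at 1.5, rejected.
print(is_marginally_stable([1.0, -1.5]))   # False
```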

In a multivariate context we might consider eigenvalues of the transfer matrix in a similar light.

van Handel (2017), for example, mentions the standard result that the eigenvalues of a symmetric matrix \(X\) are the roots of the characteristic polynomial \(\chi(t)=\operatorname{det}(t I-X)\) and, equivalently, the poles of the Stieltjes transform \(s(t):=\operatorname{Tr}\left[(t I-X)^{-1}\right]=\frac{d}{d t} \log \chi(t)\).
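That identity is easy to sanity-check numerically. A throwaway numpy sketch (nothing here is from van Handel's paper beyond the stated identity):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
X = (A + A.T) / 2  # a random symmetric matrix

# Eigenvalues two ways: directly, and as roots of the characteristic
# polynomial chi(t) = det(tI - X).
eigs = np.sort(np.linalg.eigvalsh(X))
char_roots = np.sort(np.roots(np.poly(X)).real)
print(np.allclose(eigs, char_roots))  # True

def stieltjes(t):
    """s(t) = Tr[(tI - X)^{-1}]; it has a simple pole at each eigenvalue."""
    return np.trace(np.linalg.inv(t * np.eye(4) - X))

# Evaluating just off an eigenvalue blows up, as a pole should.
print(abs(stieltjes(eigs[0] + 1e-6)))  # huge
```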

## Reparameterisation

We can use cunning reparameterisation to keep systems stable. This Betancourt podcast on Sarah Heaps' paper (Heaps 2020) on parameterising stationarity in vector autoregressions is deep and IMO points the way to some other neat tricks in neural nets. She constructs interesting priors for this case, using some reparametrisations by Ansley and Kohn (1986).
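The univariate version of the trick is easy to sketch: squash unconstrained parameters into partial autocorrelations, then run the Durbin-Levinson recursion to recover AR coefficients that are stationary by construction. A minimal numpy sketch of this classic partial-autocorrelation mapping (my own naming; not Heaps' multivariate construction):

```python
import numpy as np

def pacf_to_ar(unconstrained):
    """Map unconstrained reals to the coefficients of a stationary AR(p).

    tanh squashes each parameter into (-1, 1), interpreted as a partial
    autocorrelation; the Durbin-Levinson recursion then yields AR
    coefficients phi whose poles all lie inside the unit circle, so
    gradient descent on the unconstrained parameters can never leave
    the stationary region.
    """
    pacs = np.tanh(np.asarray(unconstrained, dtype=float))
    phi = np.array([])
    for pac in pacs:
        phi = np.concatenate([phi - pac * phi[::-1], [pac]])
    return phi

phi = pacf_to_ar([0.3, -1.2, 2.0])
# Stationarity check: the poles z^p - phi_1 z^{p-1} - ... - phi_p
# all lie inside the unit circle.
poles = np.roots(np.concatenate([[1.0], -phi]))
print(np.abs(poles).max() < 1)  # True
```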

Maybe related: Roy, McElroy, and Linton (2019).

## Continuous time

TBC.

## Stability and gradient descent

What if we are incrementally learning a system and wish the gradient descent steps not to push it away from stability? In such a case, we can possibly side-step the problem by using a topology which maximises system stability (Laroche 2007).
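A blunter alternative, independent of filter topology, is to project the system back into the stable set after each gradient step, e.g. by rescaling the transition matrix. A minimal numpy sketch (a generic projection trick, not Laroche's method):

```python
import numpy as np

def project_to_stable(A, radius=1.0):
    """Rescale A so its spectral radius is at most `radius`.

    A crude projection: if a gradient step pushes the transition matrix
    of a linear system x_{t+1} = A x_t outside the stable set, shrink it
    uniformly until the largest eigenvalue magnitude fits. Scaling A by
    c scales every eigenvalue by c, so this lands exactly on the target.
    """
    rho = np.abs(np.linalg.eigvals(A)).max()
    if rho > radius:
        A = A * (radius / rho)
    return A

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))        # generically unstable
A = project_to_stable(A, radius=0.99)
print(np.abs(np.linalg.eigvals(A)).max())  # at most ~0.99
```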

## References

*Journal of Statistical Computation and Simulation* 24 (2): 99–106.

*SIAM Journal on Control and Optimization* 55 (6): 4015–47.

*IEEE Transactions on Signal Processing* 47 (9): 2561–67.

*SIAM Journal on Control and Optimization* 55 (1): 119–55.

*Positive Trigonometric Polynomials and Signal Processing Applications*. Second edition. Signals and Communication Technology. Cham: Springer.

*Annals of Mathematics* 160 (3): 839–906.

*Convexity and Concentration*, edited by Eric Carlen, Mokshay Madiman, and Elisabeth M. Werner, 107–56. The IMA Volumes in Mathematics and Its Applications. New York, NY: Springer.

*The Journal of Machine Learning Research* 19 (1): 1025–68.

*arXiv:2004.09455 [stat]*, April.

*Algorithmic Learning Theory*, edited by Peter Auer, Alexander Clark, Thomas Zeugmann, and Sandra Zilles, 260–74. Lecture Notes in Computer Science. Bled, Slovenia: Springer International Publishing.

*Journal of the Audio Engineering Society* 55 (6): 460–71.

*IEEE Signal Processing Magazine* 27 (3): 50–61.

*Proceedings of the Royal Society of London* 16 (January): 270–83.

*42nd IEEE International Conference on Decision and Control (IEEE Cat. No.03CH37475)*, 4:3814–17, vol. 4.

*Perspectives in Robust Control*, 241–57. Lecture Notes in Control and Information Sciences. London: Springer.

*SIAM Review* 31 (4): 586–613.

*Statistica Sinica* 29 (1): 455–78.

*Automatica* 49 (9): 2860–66.

*arXiv:1802.08334 [cs, math, stat]*, February.

*Journal of Physics A: Mathematical and General* 35 (48): 10467–501.
