Optimal control

June 22, 2015 — November 1, 2019

bandit problems
Bayes
dynamical systems
linear algebra
optimization
probability
sciml
signal processing
statistics
time series
Figure 1: This is my new test problem for the OpenAI Gym.

Nothing to see here; I don’t do optimal control. But here are some notes from when I thought I might.

Feedback Systems: An Introduction for Scientists and Engineers, by Karl J. Åström and Richard M. Murray, is an interesting control systems theory course from Caltech (Åström and Murray 2008).

The online control blog post mentioned below has a summary:

Perhaps the most fundamental setting in control theory is an LDS with quadratic costs \(c_t\) and i.i.d. Gaussian perturbations \(w_t\). The solution, known as the Linear Quadratic Regulator and derived by solving the Riccati equation, is well understood and corresponds to a linear policy (i.e. the control input is a linear function of the state).

The assumption of i.i.d. perturbations has been relaxed in classical control theory with the introduction of a min-max notion, in a subfield known as \(H_{\infty}\) control. Informally, the idea behind \(H_{\infty}\) control is to design a controller which performs well against all sequences of bounded perturbations.
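To make the LQR case concrete, here is a minimal sketch of the infinite-horizon discrete-time version: solve the discrete algebraic Riccati equation with SciPy and read off the linear policy. The double-integrator system matrices are illustrative placeholders, not anything from the papers above.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative double-integrator dynamics: x_{t+1} = A x_t + B u_t + w_t
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state cost    x^T Q x
R = np.array([[1.0]])  # control cost  u^T R u

# Solve the discrete algebraic Riccati equation for the cost-to-go matrix P
P = solve_discrete_are(A, B, Q, R)

# The optimal policy is linear in the state: u_t = -K x_t
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Simulate under i.i.d. Gaussian perturbations
rng = np.random.default_rng(0)
x = np.array([5.0, 0.0])
for _ in range(50):
    u = -K @ x
    x = A @ x + B @ u + rng.normal(scale=0.1, size=2)
print("state after 50 steps:", x)
```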

There are connections and duality relations to state estimation (e.g. the classical duality between the LQR and the Kalman filter) that might be worth exploring.

1 Nuts and bolts


Åström et al. maintain a supporting Python toolkit, python-control.
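A minimal usage sketch; the second-order transfer function here is an arbitrary example, not taken from the book:

```python
import numpy as np
import control

# An arbitrary second-order plant, G(s) = 1 / (s^2 + s + 1)
G = control.tf([1.0], [1.0, 1.0, 1.0])

# Open-loop step response
t, y = control.step_response(G, T=np.linspace(0.0, 20.0, 200))

# Close the loop with unit negative feedback and check the DC gain
H = control.feedback(G, 1)
print("closed-loop DC gain:", control.dcgain(H))
```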

OpenModelica is an open-source Modelica-based modelling and simulation environment intended for industrial and academic usage. Its long-term development is supported by a non-profit organization, the Open Source Modelica Consortium (OSMC).


Related:

OpenMDAO is an open-source high-performance computing platform for systems analysis and multidisciplinary optimization, written in Python. It enables you to decompose your models, making them easier to build and maintain, while still solving them in a tightly coupled manner with efficient parallel numerical methods.

The OpenMDAO project is primarily focused on supporting gradient-based optimization with analytic derivatives to allow you to explore large design spaces with hundreds or thousands of design variables, but the framework also has a number of parallel computing features that can work with gradient-free optimization, mixed-integer nonlinear programming, and traditional design space exploration.
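For a flavour of the API, here is a sketch of the paraboloid toy problem along the lines of the project's getting-started documentation; exact option names may drift between OpenMDAO versions.

```python
import openmdao.api as om

# Unconstrained minimisation of a paraboloid, following the
# OpenMDAO getting-started example
prob = om.Problem()
prob.model.add_subsystem(
    "paraboloid",
    om.ExecComp("f = (x - 3)**2 + x*y + (y + 4)**2 - 3"),
    promotes=["x", "y", "f"],
)

prob.driver = om.ScipyOptimizeDriver()
prob.driver.options["optimizer"] = "SLSQP"
prob.model.add_design_var("x", lower=-50, upper=50)
prob.model.add_design_var("y", lower=-50, upper=50)
prob.model.add_objective("f")

prob.setup()
prob.set_val("x", 3.0)
prob.set_val("y", -4.0)
prob.run_driver()

print("x* =", prob.get_val("x"),
      "y* =", prob.get_val("y"),
      "f* =", prob.get_val("f"))
```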

2 Online control


New Methods in Control: The Gradient Perturbation Controller, a blog post by Naman Agarwal, Karan Singh and Elad Hazan, summarises their work on online control (Agarwal et al. 2019; Agarwal, Hazan, and Singh 2019):

what is the analogue of online learning and worst-case regret in robust control? …Our starting point for more robust control is regret minimization in games. Regret minimization is a well-accepted metric in online learning, and we consider applying it to online control.
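A heavily simplified sketch of the idea, assuming known dynamics: parameterise the control as a stabilising linear state-feedback term plus a linear function of recent observed disturbances, and update the disturbance-feedback matrices by online gradient descent. The actual GPC differentiates a multi-step counterfactual surrogate loss; this toy version truncates the surrogate to one step, and all matrices, the history length `H`, and the step size `lr` are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(1)

# Same illustrative LDS as above: x_{t+1} = A x_t + B u_t + w_t
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

# Stabilising baseline: the LQR gain
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

H, lr = 5, 1e-3           # disturbance-history length, gradient step size
M = np.zeros((H, 1, 2))   # disturbance-action policy parameters
w_hist = np.zeros((H, 2)) # w_hist[i] holds w_{t-1-i}

x = np.zeros(2)
total = 0.0
T = 2000
for t in range(T):
    u = -K @ x + sum(M[i] @ w_hist[i] for i in range(H))
    w = rng.uniform(-0.3, 0.3, size=2)  # bounded, not necessarily i.i.d. Gaussian
    x_next = A @ x + B @ u + w          # in practice w is recovered as x_next - A x - B u
    total += x @ Q @ x + u @ R @ u

    # One-step-truncated surrogate gradient through u:
    # d/du [ u' R u + x_{t+1}' Q x_{t+1} ] = 2 R u + 2 B' Q x_{t+1}
    g_u = 2 * (R @ u) + 2 * (B.T @ Q @ x_next)
    for i in range(H):
        M[i] -= lr * np.outer(g_u, w_hist[i])

    w_hist = np.roll(w_hist, 1, axis=0)
    w_hist[0] = w
    x = x_next

print("average cost:", total / T)
```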

3 Partially observable Markov decision problems

See POMDP.

4 References

Agarwal, Bullins, Hazan, et al. 2019. “Online Control with Adversarial Disturbances.” arXiv:1902.08721 [Cs, Math, Stat].
Agarwal, Hazan, and Singh. 2019. “Logarithmic Regret for Online Control.” arXiv:1909.05062 [Cs, Math, Stat].
Andersson, Gillis, Horn, et al. 2019. “CasADi: A Software Framework for Nonlinear Optimization and Optimal Control.” Mathematical Programming Computation.
Ariyur, and Krstic. 2003. Real-Time Optimization by Extremum-Seeking Control.
Åström, and Murray. 2008. Feedback Systems: An Introduction for Scientists and Engineers.
Bardi, and Capuzzo-Dolcetta. 2009. Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations.
Bensoussan. 2018. Estimation and Control of Dynamical Systems.
Bertsekas. 1995a. Dynamic Programming and Optimal Control, Volume 1.
———. 1995b. Dynamic Programming and Optimal Control, Volume 2.
Bertsekas, and Shreve. 1995. Stochastic Optimal Control: The Discrete Time Case.
Betts. 2001. Practical Methods for Optimal Control Using Nonlinear Programming.
Biegler. 2010. Nonlinear Programming: Concepts, Algorithms, and Applications to Chemical Processes.
Brunton, and Kutz. 2019. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control.
Chernousʹko, Ananievski, and Reshmin. 2008. Control of Nonlinear Dynamical Systems: Methods and Applications.
Elliott, Moore, and Aggoun. 1994. Hidden Markov Models: Estimation and Control.
Fleming, and Rishel. 1975. Deterministic and Stochastic Optimal Control.
Garbuno-Inigo, Hoffmann, Li, et al. 2020. “Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler.” SIAM Journal on Applied Dynamical Systems.
Glad, and Ljung. 2000. Control Theory.
Granger. 1988. “Causality, Cointegration, and Control.” Journal of Economic Dynamics and Control.
Haddad, and Chellaboina. 2011. Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach.
Holland. 1992. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence.
Kappen, and Ruiz. 2016. “Adaptive Importance Sampling for Control and Inference.” Journal of Statistical Physics.
Kim, and Mehta. 2019. “An Optimal Control Derivation of Nonlinear Smoothing Equations.”
Maxwell. 1867. “On Governors.” Proceedings of the Royal Society of London.
Mayr. 1971. “Maxwell and the Origins of Cybernetics.” Isis.
Mohan, and Kar. 2012. Continuous Time Dynamical Systems: State Estimation and Optimal Control with Orthogonal Functions.
Nijmeijer, and Schaft. 1990. Nonlinear Dynamical Control Systems.
Rückert, and Neumann. 2012. “Stochastic Optimal Control Methods for Investigating the Power of Morphological Computation.” Artificial Life.
Šindelář, Vajda, and Kárný. 2008. “Stochastic Control Optimal in the Kullback Sense.” Kybernetika.
Spall. 2003. Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control.
Spencer, and Shalizi. 2020. “Projective, Sparse, and Learnable Latent Position Network Models.” arXiv:1709.09702 [Math, Stat].
Stochastic Control. 2010.
Taghvaei, and Mehta. 2021. “An Optimal Transport Formulation of the Ensemble Kalman Filter.” IEEE Transactions on Automatic Control.