Optimal control



This is my new test problem for the OpenAI Gym.

Nothing to see here; I don’t do optimal control. But here are some notes from when I thought I might.

Karl J. Åström and Richard M. Murray's *Feedback Systems: An Introduction for Scientists and Engineers* is an interesting control systems theory course from Caltech.

The online control blog post mentioned below has a summary:

Perhaps the most fundamental setting in control theory is a linear dynamical system (LDS) with quadratic costs \(c_t\) and i.i.d. Gaussian perturbations \(w_t\). The solution, known as the Linear Quadratic Regulator (LQR) and derived by solving the Riccati equation, is well understood and corresponds to a linear policy (i.e. the control input is a linear function of the state).
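A minimal sketch of that solution using SciPy's Riccati solver (the double-integrator matrices here are invented for illustration):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy double integrator: x_{t+1} = A x_t + B u_t + w_t
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state cost   x^T Q x
R = np.array([[1.0]])  # control cost u^T R u

# Solve the discrete algebraic Riccati equation for the cost-to-go matrix P
P = solve_discrete_are(A, B, Q, R)

# The optimal policy is linear in the state: u_t = -K x_t
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
for t in range(50):
    u = -K @ x
    x = A @ x + B @ u + 0.01 * rng.standard_normal(2)  # i.i.d. Gaussian w_t
```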

The assumption of i.i.d. perturbations has been relaxed in classical control theory, with the introduction of a min-max notion, in a subfield known as \(H_{\infty}\) control. Informally, the idea behind \(H_{\infty}\) control is to design a controller which performs well against all sequences of bounded perturbations.
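python-control (mentioned below) wraps an \(H_{\infty}\) synthesis routine, hinfsyn, via the optional Slycot backend. A hedged sketch with an invented toy generalized plant; partitioning the plant correctly for a real design problem is the actual work:

```python
import numpy as np
import control  # hinfsyn needs the optional slycot dependency

# Generalized plant with inputs [w; u] and outputs [z; y] (all matrices invented)
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.hstack([np.array([[0.0], [1.0]]),   # B1: disturbance w enters here
               np.array([[0.0], [1.0]])])  # B2: control u enters here
C = np.vstack([np.array([[1.0, 0.0]]),     # C1: regulated output z
               np.array([[1.0, 0.0]])])    # C2: measured output y
D = np.array([[0.0, 1.0],   # z penalizes u directly (keeps the problem well-posed)
              [1.0, 0.0]])  # y is corrupted by the disturbance channel
P = control.ss(A, B, C, D)

# Controller minimizing the worst-case gain from disturbance w to regulated output z
K, CL, gam, rcond = control.hinfsyn(P, 1, 1)  # 1 measurement, 1 control
print("achieved H-infinity norm bound:", gam)
```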

There are some connections and dual relations to state estimation that might be worth exploring.
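One classic such relation, stated from memory: in continuous time, the LQR Riccati equation for the system \((A, B)\) with costs \((Q, R)\) is the Kalman–Bucy covariance equation for the dual system \((A^\top, C^\top)\) with noise covariances \((W, V)\),

\[
A^\top P + P A - P B R^{-1} B^\top P + Q = 0
\qquad\longleftrightarrow\qquad
A \Sigma + \Sigma A^\top - \Sigma C^\top V^{-1} C \Sigma + W = 0,
\]

where \(P\) is the control cost-to-go and \(\Sigma\) the filter's steady-state error covariance.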

Nuts and bolts

Åström et al. maintain a supporting Python toolkit, python-control.
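A minimal taste of the library (the plant and compensator below are arbitrary):

```python
import control

G = control.tf([1.0], [1.0, 2.0, 1.0])  # plant: 1 / (s^2 + 2s + 1)
C = control.tf([2.0, 1.0], [1.0, 0.0])  # PI compensator: (2s + 1) / s

T = control.feedback(C * G, 1)          # unity-feedback closed loop
t, y = control.step_response(T)         # simulate a unit step reference
print("steady-state output:", y[-1])    # integral action should drive this to ~1
```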

OpenModelica is an open-source Modelica-based modeling and simulation environment intended for industrial and academic usage. Its long-term development is supported by a non-profit organization, the Open Source Modelica Consortium (OSMC).

Related:

OpenMDAO is an open-source high-performance computing platform for systems analysis and multidisciplinary optimization, written in Python. It enables you to decompose your models, making them easier to build and maintain, while still solving them in a tightly coupled manner with efficient parallel numerical methods.

The OpenMDAO project is primarily focused on supporting gradient-based optimization with analytic derivatives to allow you to explore large design spaces with hundreds or thousands of design variables, but the framework also has a number of parallel computing features that can work with gradient-free optimization, mixed-integer nonlinear programming, and traditional design space exploration.
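For flavour, a minimal sketch adapted from memory of OpenMDAO's introductory paraboloid example (treat the specifics as approximate, not canonical):

```python
import openmdao.api as om

# Minimize f(x, y) = (x - 3)^2 + x*y + (y + 4)^2 - 3 over x and y
prob = om.Problem()
prob.model.add_subsystem(
    "parab",
    om.ExecComp("f = (x - 3.0)**2 + x*y + (y + 4.0)**2 - 3.0"),
    promotes=["*"],
)
prob.driver = om.ScipyOptimizeDriver(optimizer="SLSQP")
prob.model.add_design_var("x", lower=-50.0, upper=50.0)
prob.model.add_design_var("y", lower=-50.0, upper=50.0)
prob.model.add_objective("f")

prob.setup()
prob.set_val("x", 3.0)
prob.set_val("y", -4.0)
prob.run_driver()
print(prob.get_val("x"), prob.get_val("y"), prob.get_val("f"))
```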

Online control

New Methods in Control: The Gradient Perturbation Controller by Naman Agarwal, Karan Singh and Elad Hazan (Agarwal et al. 2019; Agarwal, Hazan, and Singh 2019).

what is the analogue of online learning and worst-case regret in robust control? …Our starting point for more robust control is regret minimization in games. Regret minimization is a well-accepted metric in online learning, and we consider applying it to online control.
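To get a feel for the object under analysis, here is a toy numpy caricature of the disturbance-action policy the paper studies, \(u_t = -K x_t + \sum_{i=1}^{H} M_i w_{t-i}\), with online gradient steps on the \(M_i\). N.B. the actual GPC descends a truncated counterfactual loss over the memory window; for brevity this sketch descends only the instantaneous control cost, so it is a caricature, not the algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, H, eta = 2, 1, 5, 0.01  # state dim, control dim, memory length, step size

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
R = np.eye(m)
K = np.array([[0.3, 0.7]])                 # a stabilizing baseline gain
M = [np.zeros((m, n)) for _ in range(H)]   # learned disturbance-action weights
W = [np.zeros(n) for _ in range(H)]        # recent disturbances w_{t-1}..w_{t-H}

x = np.zeros(n)
for t in range(1000):
    u = -K @ x + sum(Mi @ wi for Mi, wi in zip(M, W))
    w = 0.1 * rng.standard_normal(n)       # i.i.d. here; adversarial in the paper
    x_next = A @ x + B @ u + w
    # Gradient of the instantaneous cost u^T R u w.r.t. each M_i, holding x fixed:
    for i in range(H):
        M[i] -= eta * 2.0 * np.outer(R @ u, W[i])
    W = [w] + W[:-1]                       # shift the disturbance history
    x = x_next
```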

Partially observable Markov decision problems

See POMDP.

References

Agarwal, Naman, Brian Bullins, Elad Hazan, Sham M. Kakade, and Karan Singh. 2019. “Online Control with Adversarial Disturbances.” arXiv:1902.08721 [Cs, Math, Stat], February.
Agarwal, Naman, Elad Hazan, and Karan Singh. 2019. “Logarithmic Regret for Online Control.” arXiv:1909.05062 [Cs, Math, Stat], September.
Andersson, Joel A. E., Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. 2019. “CasADi: A Software Framework for Nonlinear Optimization and Optimal Control.” Mathematical Programming Computation 11 (1): 1–36.
Ariyur, Kartik B., and Miroslav Krstic. 2003. Real-Time Optimization by Extremum-Seeking Control. John Wiley & Sons.
Åström, Karl J., and Richard M. Murray. 2008. Feedback Systems: An Introduction for Scientists and Engineers. Princeton: Princeton University Press.
Bardi, Martino, and Italo Capuzzo-Dolcetta. 2009. Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations. Springer Science & Business Media.
Bensoussan, Alain. 2018. Estimation and Control of Dynamical Systems.
Bertsekas, Dimitri P. 1995a. Dynamic Programming and Optimal Control Volume 1. Athena Scientific.
———. 1995b. Dynamic Programming and Optimal Control Volume 2. Athena Scientific.
Bertsekas, Dimitri P., and Steven E. Shreve. 1995. Stochastic Optimal Control: The Discrete Time Case. Athena Scientific.
Betts, John T. 2001. Practical Methods for Optimal Control Using Nonlinear Programming. Society for Industrial and Applied Mathematics.
Biegler, Lorenz T. 2010. Nonlinear Programming: Concepts, Algorithms, and Applications to Chemical Processes. SIAM.
Brunton, Steven L., and Jose Nathan Kutz. 2019. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. Cambridge: Cambridge University Press.
Chernousʹko, Felix L., Igor M. Ananievski, and Sergey A. Reshmin. 2008. Control of Nonlinear Dynamical Systems: Methods and Applications. Springer.
Elliott, Robert James, John B. Moore, and Lakhdar Aggoun. 1994. Hidden Markov Models: Estimation and Control. Springer.
Fleming, Wendell H., and Raymond W. Rishel. 1975. Deterministic and Stochastic Optimal Control. Springer.
Garbuno-Inigo, Alfredo, Franca Hoffmann, Wuchen Li, and Andrew M. Stuart. 2020. “Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler.” SIAM Journal on Applied Dynamical Systems 19 (1): 412–41.
Glad, Torkel, and Lennart Ljung. 2000. Control Theory. CRC Press.
Granger, Clive W. J. 1988. “Causality, Cointegration, and Control.” Journal of Economic Dynamics and Control 12 (2-3): 551–59.
Haddad, Wassim M., and VijaySekhar Chellaboina. 2011. Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach. Princeton: Princeton University Press.
Holland, John H. 1992. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. The MIT Press.
Kappen, H. J., and H. C. Ruiz. 2016. “Adaptive Importance Sampling for Control and Inference.” Journal of Statistical Physics 162 (5): 1244–66.
Kim, Jin W., and Prashant G. Mehta. 2019. “An Optimal Control Derivation of Nonlinear Smoothing Equations,” April.
Maxwell, J. Clerk. 1867. “On Governors.” Proceedings of the Royal Society of London 16 (January): 270–83.
Mayr, Otto. 1971. “Maxwell and the Origins of Cybernetics.” Isis 62 (4): 425–44.
Mohan, B. M., and S. K. Kar. 2012. Continuous Time Dynamical Systems: State Estimation and Optimal Control With Orthogonal Functions. CRC Press.
Nijmeijer, H., and A. J. van der Schaft. 1990. Nonlinear Dynamical Control Systems. New York: Springer-Verlag.
Rückert, Elmar A., and Gerhard Neumann. 2012. “Stochastic Optimal Control Methods for Investigating the Power of Morphological Computation.” Artificial Life 19 (1): 115–31.
Šindelář, Jan, Igor Vajda, and Miroslav Kárnỳ. 2008. “Stochastic Control Optimal in the Kullback Sense.” Kybernetika 44 (1): 53–60.
Spall, James C. 2003. Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. John Wiley & Sons.
Spencer, Neil A., and Cosma Rohilla Shalizi. 2020. “Projective, Sparse, and Learnable Latent Position Network Models.” arXiv:1709.09702 [Math, Stat], February.
Stochastic Control. 2010. Sciyo.
Taghvaei, Amirhossein, and Prashant G. Mehta. 2021. “An Optimal Transport Formulation of the Ensemble Kalman Filter.” IEEE Transactions on Automatic Control 66 (7): 3052–67.
