Optimal control


Nothing to see here; I don’t do optimal control. But here are some notes from when I thought I might.

Feedback Systems: An Introduction for Scientists and Engineers by Karl J. Åström and Richard M. Murray is the textbook for an interesting control-systems theory course at Caltech.

The online control blog post mentioned below has a summary:

Perhaps the most fundamental setting in control theory is an LDS (linear dynamical system) with quadratic costs \(c_t\) and i.i.d. Gaussian perturbations \(w_t\). The solution, known as the Linear Quadratic Regulator (LQR) and derived by solving the Riccati equation, is well understood and corresponds to a linear policy (i.e. the control input is a linear function of the state).
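As a concrete sketch of the Riccati machinery: for a scalar system \(x_{t+1} = a x_t + b u_t + w_t\) with cost \(\sum_t (q x_t^2 + r u_t^2)\), the Riccati recursion has a fixed point \(P\) from which the linear gain \(K\) falls out. This is an illustrative toy of my own, not code from any of the referenced texts, and the example parameters are made up.

```python
# Infinite-horizon LQR for a scalar linear system x_{t+1} = a x_t + b u_t,
# cost sum_t (q x_t^2 + r u_t^2). Iterate the discrete Riccati recursion
# to a fixed point P, then read off the linear gain K, so u_t = -K x_t.

def scalar_lqr(a, b, q, r, tol=1e-12, max_iter=10_000):
    P = q  # initialise with the terminal (stage) cost
    for _ in range(max_iter):
        P_next = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
        if abs(P_next - P) < tol:
            P = P_next
            break
        P = P_next
    K = a * b * P / (r + b * b * P)  # optimal linear feedback gain
    return P, K

# Example: a random walk (a = 1) with unit costs. The fixed-point equation
# reduces to P^2 = 1 + P, so P is the golden ratio, and the closed-loop
# dynamics a - b*K are stable.
P, K = scalar_lqr(a=1.0, b=1.0, q=1.0, r=1.0)
print(P, K, 1.0 - K)  # P ≈ 1.618, K ≈ 0.618, closed loop ≈ 0.382
```

The matrix case is the same recursion with the division replaced by an inverse; in practice one would call a DARE solver rather than iterating by hand.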

The assumption of i.i.d. perturbations has been relaxed in classical control theory with the introduction of a min-max notion, in a subfield known as \(H_{\infty}\) control. Informally, the idea behind \(H_{\infty}\) control is to design a controller which performs well against all sequences of bounded perturbations.
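In symbols (a schematic rendering of the contrast, not lifted from any particular text): where LQR minimizes expected cost under stochastic \(w_t\), the \(H_{\infty}\) objective is a min-max over worst-case bounded disturbance sequences,

```latex
% LQR: average case over i.i.d. Gaussian disturbances
\min_{\pi} \; \mathbb{E}_{w_t \sim \mathcal{N}(0, \Sigma)}
  \left[ \sum_{t} c_t(x_t, u_t) \right]

% H-infinity: worst case over all bounded disturbance sequences
\min_{\pi} \; \max_{\|w\|_\infty \le 1} \sum_{t} c_t(x_t, u_t)
```

The classical frequency-domain statement is equivalent in spirit: minimize the \(H_{\infty}\) norm of the closed-loop transfer function from disturbance to regulated output.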

Nuts and bolts

Åström et al. maintain a supporting Python toolkit, python-control.

OpenModelica is an open-source Modelica-based modeling and simulation environment intended for industrial and academic use. Its long-term development is supported by a non-profit organization, the Open Source Modelica Consortium (OSMC).

Related:

OpenMDAO is an open-source high-performance computing platform for systems analysis and multidisciplinary optimization, written in Python. It enables you to decompose your models, making them easier to build and maintain, while still solving them in a tightly coupled manner with efficient parallel numerical methods.

The OpenMDAO project is primarily focused on supporting gradient-based optimization with analytic derivatives to allow you to explore large design spaces with hundreds or thousands of design variables, but the framework also has a number of parallel computing features that can work with gradient-free optimization, mixed-integer nonlinear programming, and traditional design space exploration.

Online

New Methods in Control: The Gradient Perturbation Controller by Naman Agarwal, Karan Singh and Elad Hazan (Agarwal, Bullins, et al. 2019; Agarwal, Hazan, and Singh 2019).
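The key device in these papers, as I read them, is the "disturbance-action" policy class: rather than learning a state-feedback gain directly, the controller fixes some stabilizing \(K\) and learns a linear map over the \(H\) most recent observed disturbances,

```latex
u_t = -K x_t + \sum_{i=1}^{H} M_t^{[i]} \, w_{t-i}
```

which makes the cost convex in the parameters \(M_t^{[i]}\), so online gradient methods apply and yield sublinear regret against adversarial disturbances, sharpened to logarithmic regret in (Agarwal, Hazan, and Singh 2019).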

Agarwal, Naman, Brian Bullins, Elad Hazan, Sham M. Kakade, and Karan Singh. 2019. “Online Control with Adversarial Disturbances.” February 22, 2019. http://arxiv.org/abs/1902.08721.

Agarwal, Naman, Elad Hazan, and Karan Singh. 2019. “Logarithmic Regret for Online Control.” September 11, 2019. http://arxiv.org/abs/1909.05062.

Andersson, Joel A. E., Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. 2019. “CasADi: A Software Framework for Nonlinear Optimization and Optimal Control.” Mathematical Programming Computation 11 (1): 1–36. https://doi.org/10.1007/s12532-018-0139-4.

Ariyur, Kartik B., and Miroslav Krstic. 2003. Real-Time Optimization by Extremum-Seeking Control. John Wiley & Sons.

Bardi, Martino, and Italo Capuzzo-Dolcetta. 2009. Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations. Springer Science & Business Media.

Bertsekas, Dimitri P. 1995a. Dynamic Programming and Optimal Control Volume 1. Athena Scientific.

———. 1995b. Dynamic Programming and Optimal Control Volume 2. Athena Scientific.

Bertsekas, Dimitri P., and Steven E. Shreve. 1995. Stochastic Optimal Control: The Discrete Time Case. Athena Scientific.

Betts, John T. 2001. Practical Methods for Optimal Control Using Nonlinear Programming. Society for Industrial and Applied Mathematics. http://books.google.com?id=Yn53JcYAeaoC.

Biegler, Lorenz T. 2010. Nonlinear Programming: Concepts, Algorithms, and Applications to Chemical Processes. SIAM. http://books.google.com?id=VdB1wJQu0sgC.

Chernousʹko, Felix L., Igor M. Ananievski, and Sergey A. Reshmin. 2008. Control of Nonlinear Dynamical Systems: Methods and Applications. Springer.

Elliott, Robert James, John B. Moore, and Lakhdar Aggoun. 1994. Hidden Markov Models: Estimation and Control. Springer.

Fleming, Wendell H., and Raymond W. Rishel. 1975. Deterministic and Stochastic Optimal Control. Springer.

Garbuno-Inigo, Alfredo, Franca Hoffmann, Wuchen Li, and Andrew M. Stuart. 2020. “Interacting Langevin Diffusions: Gradient Structure and Ensemble Kalman Sampler.” SIAM Journal on Applied Dynamical Systems 19 (1): 412–41. https://doi.org/10.1137/19M1251655.

Glad, Torkel, and Lennart Ljung. 2000. Control Theory. CRC Press.

Granger, Clive W J. 1988. “Causality, Cointegration, and Control.” Journal of Economic Dynamics and Control 12 (2-3): 551–59. https://doi.org/10.1016/0165-1889(88)90055-3.

Haddad, Wassim M, and VijaySekhar Chellaboina. 2011. Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach. Princeton: Princeton University Press. http://public.eblib.com/choice/publicfullrecord.aspx?p=768552.

Holland, John H. 1992. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. The MIT Press.

Kappen, H. J., and H. C. Ruiz. 2016. “Adaptive Importance Sampling for Control and Inference.” Journal of Statistical Physics 162 (5): 1244–66. https://doi.org/10.1007/s10955-016-1446-7.

Kim, Jin W., and Prashant G. Mehta. 2019. “An Optimal Control Derivation of Nonlinear Smoothing Equations,” April. https://arxiv.org/abs/1904.01710v1.

Mohan, B. M., and S. K. Kar. 2012. Continuous Time Dynamical Systems: State Estimation and Optimal Control with Orthogonal Functions. CRC Press.

Nijmeijer, H, and A. J. van der Schaft. 1990. Nonlinear Dynamical Control Systems. New York: Springer-Verlag. http://catalog.hathitrust.org/api/volumes/oclc/20894240.html.

Rückert, Elmar A., and Gerhard Neumann. 2012. “Stochastic Optimal Control Methods for Investigating the Power of Morphological Computation.” Artificial Life 19 (1): 115–31. https://doi.org/10.1162/ARTL_a_00085.

Spall, James C. 2003. Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. John Wiley & Sons.

Spencer, Neil A., and Cosma Rohilla Shalizi. 2020. “Projective, Sparse, and Learnable Latent Position Network Models.” February 7, 2020. http://arxiv.org/abs/1709.09702.

Stochastic Control. 2010. Sciyo.

Taghvaei, Amirhossein, and Prashant G. Mehta. 2019. “An Optimal Transport Formulation of the Ensemble Kalman Filter,” October. https://arxiv.org/abs/1910.02338v1.