
Optimal control

Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory. Optimal control is an extension of the calculus of variations, and is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to calculus of variations by Edward J. McShane. Optimal control can be seen as a control strategy in control theory. (Wikipedia).
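
For orientation, a standard continuous-time statement of the problem described above (the Bolza form) looks like this; the symbols Φ, L, f, U, the horizon T, and the initial state x0 are generic placeholders rather than anything tied to the examples in the paragraph:

```latex
\[
\begin{aligned}
\min_{u(\cdot)} \quad & J = \Phi\bigl(x(T)\bigr) + \int_{0}^{T} L\bigl(x(t),u(t),t\bigr)\,dt \\
\text{subject to} \quad & \dot{x}(t) = f\bigl(x(t),u(t),t\bigr), \qquad x(0) = x_{0}, \\
& u(t) \in U \quad \text{for all } t \in [0,T],
\end{aligned}
\]
```

Here x is the state of the dynamical system (the spacecraft, the economy), u is the control (thrust, policy), and the terminal cost Φ and running cost L encode the objective, for example fuel expenditure or unemployment.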

Optimal control of spin systems with applications in (...) - D. Sugny - Workshop 2 - CEB T2 2018

Dominique Sugny (Univ. Bourgogne) / 05.06.2018. Optimal control of spin systems with applications in Magnetic Resonance. Optimal control can be viewed as a generalization of the classical calculus of variations for problems with dynamical constraints. (...)
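
To make the abstract's opening claim concrete: the classical calculus-of-variations problem is the special case in which the control is the state's own velocity, i.e.

```latex
\[
\min_{x(\cdot)} \int_{0}^{T} L\bigl(x(t),\dot{x}(t),t\bigr)\,dt
\quad\Longleftrightarrow\quad
\min_{u(\cdot)} \int_{0}^{T} L\bigl(x(t),u(t),t\bigr)\,dt
\quad \text{subject to } \dot{x}(t) = u(t),
\]
```

so it is the general dynamical constraint dx/dt = f(x, u, t) that distinguishes optimal control from the classical setting.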

From playlist 2018 - T2 - Measurement and Control of Quantum Systems: Theory and Experiments

What Is Gain Scheduling? | Control Systems in Practice

Often, the best control system is the simplest. When the system you’re trying to control is highly nonlinear, this can lead to very complex controllers. This video continues our discussion on control systems in practice by talking about a simple form of nonlinear control: gain scheduling.
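
A minimal sketch of the idea in Python, with a made-up gain table and scheduling variable rather than anything taken from the video:

```python
import numpy as np

# Hypothetical gain table: PI gains tuned at a few operating points of a
# nonlinear plant. The scheduling variable and all numbers are made up.
schedule_points = np.array([0.0, 50.0, 100.0])   # operating points
kp_table = np.array([2.0, 1.2, 0.6])             # proportional gain at each point
ki_table = np.array([0.5, 0.3, 0.1])             # integral gain at each point

def scheduled_gains(sched_var):
    """Linearly interpolate the controller gains at the current operating point."""
    kp = np.interp(sched_var, schedule_points, kp_table)
    ki = np.interp(sched_var, schedule_points, ki_table)
    return kp, ki

def pi_control(error, error_integral, sched_var):
    """One evaluation of a gain-scheduled PI control law."""
    kp, ki = scheduled_gains(sched_var)
    return kp * error + ki * error_integral

# The same tracking error yields a different control action at different
# operating points, because the gains have been rescheduled.
for sv in (0.0, 75.0):
    print(f"scheduling variable = {sv:5.1f}  control = {pi_control(1.0, 0.2, sv):.3f}")
```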

From playlist Control Systems in Practice

Data-Driven Control: The Goal of Balanced Model Reduction

In this lecture, we discuss the overarching goal of balanced model reduction: identifying key states that are most jointly controllable and observable, to capture the most input-output energy. https://www.eigensteve.com/
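
A small numerical sketch of that goal (a made-up two-state system, not one from the lecture): solve Lyapunov equations for the two Gramians with SciPy and read off the Hankel singular values, which rank states by how jointly controllable and observable they are.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy stable LTI system x' = A x + B u, y = C x (made-up numbers).
A = np.array([[-1.0,  0.5],
              [ 0.0, -3.0]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 2.0]])

# Controllability Gramian Wc:  A Wc + Wc A^T + B B^T = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
# Observability Gramian Wo:    A^T Wo + Wo A + C^T C = 0
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: directions with large values are jointly
# controllable and observable and carry most of the input-output energy,
# so they are the states kept by balanced truncation.
hsv = np.sqrt(np.linalg.eigvals(Wc @ Wo).real)
print("Hankel singular values:", np.sort(hsv)[::-1])
```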

From playlist Data-Driven Control with Machine Learning

Fuzzy control of inverted pendulum

Fuzzy control of an inverted pendulum: a state-feedback controller is designed from a Takagi-Sugeno (T-S) fuzzy model, taking both system stability and performance into account.
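
A rough sketch of the parallel distributed compensation idea behind such controllers, with illustrative gains and membership functions rather than the ones designed in the demo:

```python
import numpy as np

# Takagi-Sugeno-style sketch: two local state-feedback gains for an inverted
# pendulum (one valid near upright, one at larger angles), blended with fuzzy
# membership weights that depend on the pendulum angle. Gains are illustrative.
K1 = np.array([60.0, 15.0])      # local gain for "angle is small"
K2 = np.array([90.0, 25.0])      # local gain for "angle is large"

def memberships(theta, theta_max=np.pi / 3):
    """Triangular membership: w1 = 'angle is small', w2 = 1 - w1."""
    w1 = max(0.0, 1.0 - abs(theta) / theta_max)
    return np.array([w1, 1.0 - w1])

def fuzzy_state_feedback(x):
    """Parallel distributed compensation: u = -(w1*K1 + w2*K2) @ x."""
    theta, _ = x
    w = memberships(theta)
    K = w[0] * K1 + w[1] * K2
    return float(-K @ x)

x = np.array([0.4, 0.0])         # state: [angle (rad), angular velocity]
print("blended control input:", round(fuzzy_state_feedback(x), 2))
```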

From playlist Demonstrations

(ML 11.8) Bayesian decision theory

Choosing an optimal decision rule under a Bayesian model. An informal discussion of Bayes rules, generalized Bayes rules, and the complete class theorems.
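
A toy numerical Bayes rule, with a made-up prior, likelihood, and loss matrix: for each observation, choose the action that minimizes posterior expected loss.

```python
import numpy as np

# Toy binary decision problem with a made-up prior, likelihood, and loss.
prior = np.array([0.6, 0.4])                 # P(theta = 0), P(theta = 1)
likelihood = np.array([[0.8, 0.2],           # P(x | theta = 0) for x in {0, 1}
                       [0.1, 0.9]])          # P(x | theta = 1) for x in {0, 1}
loss = np.array([[0.0, 1.0],                 # loss[action, true theta]
                 [2.0, 0.0]])                # declaring 1 when truth is 0 costs double

def bayes_action(x):
    """Choose the action minimizing posterior expected loss after observing x."""
    posterior = prior * likelihood[:, x]
    posterior /= posterior.sum()
    expected_loss = loss @ posterior         # one entry per candidate action
    return int(np.argmin(expected_loss))

for x in (0, 1):
    print(f"observation x = {x} -> Bayes action {bayes_action(x)}")
```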

From playlist Machine Learning

Everything You Need to Know About Control Theory

Control theory is a mathematical framework that gives us the tools to develop autonomous systems. This video walks through the different aspects of control theory that you need to know. Some of the concepts covered include the difference between open-loop and closed-loop control (...)
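
A small sketch of the open-loop versus closed-loop distinction just mentioned, using a made-up first-order plant with an unmodeled constant disturbance:

```python
# Contrast open-loop and closed-loop control of a first-order plant
#   x' = -a*x + u + d,  with an unmodeled constant disturbance d (made up).
a, d, dt, horizon = 1.0, 0.5, 0.01, 5.0
steps, setpoint = int(horizon / dt), 1.0

def simulate(controller):
    x = 0.0
    for _ in range(steps):
        u = controller(x)
        x += dt * (-a * x + u + d)           # Euler step of the plant
    return x

def open_loop(_x):
    return a * setpoint                      # input chosen from the nominal model only

def closed_loop(x):
    return 10.0 * (setpoint - x)             # proportional feedback on the error

print("final value, open loop:  ", round(simulate(open_loop), 3))
print("final value, closed loop:", round(simulate(closed_loop), 3))
```

The open-loop input, computed from the nominal model alone, settles away from the setpoint because it never sees the disturbance; the feedback controller largely rejects it.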

From playlist Control Systems in Practice

What Is Robust Control? | Robust Control, Part 1

Watch the other videos in this series: Robust Control, Part 2: Understanding Disk Margin - https://youtu.be/XazdN6eZF80; Robust Control, Part 3: Disk Margins for MIMO Systems - https://youtu.be/sac_IYBjcq0. This video covers a high-level introduction to robust control. (...)
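
The disk-margin machinery covered in the series needs more tooling than fits here, but the underlying question can be sketched with a classical gain-margin check: perturb the loop gain of a nominal (made-up) loop transfer function and test closed-loop stability from the characteristic polynomial.

```python
import numpy as np

# Sweep a multiplicative gain perturbation g on a nominal loop transfer function
#   L(s) = K / (s (s + 1) (s + 2)),  K = 2  (made up),
# and check closed-loop stability from the roots of 1 + g * L(s) = 0.
K = 2.0
den = np.polymul([1.0, 0.0], np.polymul([1.0, 1.0], [1.0, 2.0]))  # s (s+1) (s+2)

for g in (0.5, 1.0, 2.0, 3.5):
    char_poly = den.copy()
    char_poly[-1] += g * K                   # characteristic polynomial: den + g*K
    stable = np.all(np.roots(char_poly).real < 0)
    print(f"gain perturbation g = {g:3.1f} -> closed loop {'stable' if stable else 'unstable'}")
```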

From playlist Robust Control

What Is PID Control? | Understanding PID Control, Part 1

Chances are you’ve interacted with something that uses a form of this control law, even if you weren’t aware of it. That’s why it is worth learning a bit more about what this control law is, and how it helps. PID is just one form of feedback controller. It is the simplest type of controller (...)
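
A bare-bones discrete PID loop on a made-up first-order plant, just to show the proportional, integral, and derivative terms at work (the gains are illustrative, not tuned):

```python
# Bare-bones discrete PID loop on a first-order plant x' = -x + u (made up),
# driving the output to a setpoint. Gains are illustrative.
kp, ki, kd = 4.0, 2.0, 0.1
dt, steps, setpoint = 0.01, 1000, 1.0

x, integral, prev_error = 0.0, 0.0, 0.0
for _ in range(steps):
    error = setpoint - x
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    prev_error = error
    x += dt * (-x + u)                       # Euler step of the plant

print("output after 10 s:", round(x, 4))     # settles near the setpoint
```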

From playlist Understanding PID Control

(ML 11.4) Choosing a decision rule - Bayesian and frequentist

Choosing a decision rule, from Bayesian and frequentist perspectives. To make the problem well-defined from the frequentist perspective, some additional guiding principle, such as unbiasedness, minimax, or invariance, is introduced.
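
A tiny numerical illustration of why an extra principle is needed on the frequentist side (made-up likelihood, 0-1 loss): the frequentist risk is a function of the unknown parameter, the two simple rules below are incomparable, and a criterion such as minimax breaks the tie.

```python
import numpy as np

# Frequentist risk of two decision rules for a binary problem with 0-1 loss
# (made-up likelihood). Neither rule dominates the other, so an extra
# criterion, for example minimax, is needed to choose between them.
likelihood = np.array([[0.8, 0.2],           # P(x | theta = 0)
                       [0.1, 0.9]])          # P(x | theta = 1)
loss = np.array([[0.0, 1.0],                 # loss[action, true theta]
                 [1.0, 0.0]])

# Two candidate rules mapping observation x in {0, 1} to an action in {0, 1}.
rules = {"always 0": [0, 0], "follow x": [0, 1]}

for name, rule in rules.items():
    # Risk R(theta, delta) = E_x[ loss(delta(x), theta) | theta ]
    risk = [sum(likelihood[theta, x] * loss[rule[x], theta] for x in (0, 1))
            for theta in (0, 1)]
    print(f"{name:9s} risk per theta: {np.round(risk, 2)}  worst case: {max(risk):.2f}")
```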

From playlist Machine Learning

Model Predictive Control

This lecture provides an overview of model predictive control (MPC), which is one of the most powerful and general control frameworks. MPC is used extensively in industrial control settings, and can be used with nonlinear systems and systems with constraints on the state or actuation inputs. (...)
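
A minimal receding-horizon sketch in the spirit of MPC, using a made-up discrete double integrator, a quadratic cost, a box constraint on the input, and a general-purpose SciPy solver rather than a dedicated QP solver:

```python
import numpy as np
from scipy.optimize import minimize

# Receding-horizon sketch for a discrete double integrator with a bounded
# input: at each step a finite-horizon cost is minimized and only the first
# input of the optimal sequence is applied. All numbers are made up.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
N, u_max = 10, 1.0                           # horizon length and input bound

def horizon_cost(u_seq, x_init):
    x, cost = x_init.copy(), 0.0
    for u in u_seq:
        x = A @ x + B.flatten() * u
        cost += x @ x + 0.1 * u**2           # quadratic stage cost
    return cost

x = np.array([2.0, 0.0])                     # current state
for _ in range(30):
    res = minimize(horizon_cost, np.zeros(N), args=(x,),
                   bounds=[(-u_max, u_max)] * N)
    x = A @ x + B.flatten() * res.x[0]       # apply only the first optimal input
print("state after 30 MPC steps:", np.round(x, 3))
```

At every step the finite-horizon problem is re-solved from the current state and only the first optimal input is applied, which is the defining receding-horizon pattern.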

From playlist Control Bootcamp

Learning Optimal Control with Stochastic Models of Hamiltonian Dynamics for Shape & Function Optim.

Speaker: Chandrajit Bajaj (7/25/22). Abstract: Shape and Function Optimization can be achieved through Optimal Control over an infinite-dimensional search space. All optimal control problems can be solved by first applying the Pontryagin maximum principle (...)
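
For reference, the first step named in the abstract, the Pontryagin maximum principle, supplies first-order necessary conditions of roughly this form (written here in the minimum convention for a cost to be minimized):

```latex
\[
H(x,u,\lambda,t) = L(x,u,t) + \lambda^{\top} f(x,u,t), \qquad
\dot{x} = \frac{\partial H}{\partial \lambda}, \qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
u^{*}(t) = \arg\min_{u \in U} H\bigl(x^{*}(t),u,\lambda(t),t\bigr),
\]
```

where λ is the costate; solving these conditions typically leads to a two-point boundary value problem.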

From playlist Applied Geometry for Data Sciences 2022

Optimizing HEV Models

Learn about HEV modeling and simulation. In this video, you will:
- Get an introduction to optimization and learn about MATLAB® and Simulink® optimization tools.
- Learn how to simultaneously optimize control and component parameters.
- Find a common set of control parameters for various (...)

From playlist Hybrid Electric Vehicles

Asymptotic analysis of a Boundary Optimal Control Problem by Abu Sufian

PROGRAM: MULTI-SCALE ANALYSIS AND THEORY OF HOMOGENIZATION. ORGANIZERS: Patrizia Donato, Editha Jose, Akambadath Nandakumaran and Daniel Onofrei. DATE: 26 August 2019 to 06 September 2019. VENUE: Madhava Lecture Hall, ICTS, Bangalore. Homogenization is a mathematical procedure to understand (...)

From playlist Multi-scale Analysis And Theory Of Homogenization 2019

Ivan Yegorov: "Attenuation of the curse of dimensionality in continuous-time nonlinear optimal f..."

High Dimensional Hamilton-Jacobi PDEs 2020, Workshop I: High Dimensional Hamilton-Jacobi Methods in Control and Differential Games. "Attenuation of the curse of dimensionality in continuous-time nonlinear optimal feedback stabilization problems", Ivan Yegorov, North Dakota State University.
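
The object behind the curse of dimensionality here is the Hamilton-Jacobi-Bellman equation; for a time-invariant infinite-horizon problem with value function V it reads, schematically,

```latex
\[
0 = \min_{u \in U} \Bigl[ L(x,u) + \nabla V(x)^{\top} f(x,u) \Bigr],
\qquad
u^{*}(x) = \arg\min_{u \in U} \Bigl[ L(x,u) + \nabla V(x)^{\top} f(x,u) \Bigr],
\]
```

and the cost of solving it on a grid grows exponentially with the state dimension, which is the difficulty the talk addresses.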

From playlist High Dimensional Hamilton-Jacobi PDEs 2020

Priya Donti - Optimization-in-the-loop AI for energy and climate - IPAM at UCLA

Recorded 28 February 2023. Priya Donti of Cornell University presents "Optimization-in-the-loop AI for energy and climate" at IPAM's Artificial Intelligence and Discrete Optimization Workshop. Abstract: Addressing climate change will require concerted action across society (...)

From playlist 2023 Artificial Intelligence and Discrete Optimization

DDPS | Differentiable Programming for Modeling and Control of Dynamical Systems by Jan Drgona

Description: In this talk, we will present a differentiable programming perspective on optimal control of dynamical systems. We introduce differentiable predictive control (DPC) as a model-based policy optimization method that systematically integrates the principles of classical model predictive control. (...)
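
The talk's method is not reproduced here, but the flavor of policy optimization over a model rollout can be sketched as follows: a linear state-feedback policy is improved by descending a rollout cost, with the gradient through the rollout approximated by finite differences rather than the automatic differentiation a differentiable-programming implementation would use (the system, cost, and hyperparameters are all made up).

```python
import numpy as np

# Made-up linear model used for the rollouts.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])

def rollout_cost(K, x0=(1.0, 0.0), steps=50):
    """Quadratic cost of rolling the model out under the policy u = -K x."""
    x, cost = np.array(x0), 0.0
    for _ in range(steps):
        u = float(-K @ x)
        x = A @ x + B.flatten() * u
        cost += x @ x + 0.01 * u**2
    return cost

K = np.array([0.1, 0.1])                     # initial policy parameters
eps, lr = 1e-4, 2e-4                         # finite-difference step, learning rate
print("rollout cost before training:", round(rollout_cost(K), 2))
for _ in range(100):
    grad = np.array([(rollout_cost(K + eps * e) - rollout_cost(K - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
    K -= lr * grad                           # gradient step on the policy parameters
print("rollout cost after training: ", round(rollout_cost(K), 2))
print("learned feedback gain:", np.round(K, 3))
```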

From playlist Data-driven Physical Simulations (DDPS) Seminar Series

Moritz Diehl: "Convexity Exploiting Newton-Type Optimization for Learning and Control"

Intersections between Control, Learning and Optimization 2020. "Convexity Exploiting Newton-Type Optimization for Learning and Control", Moritz Diehl, University of Freiburg. Abstract: This talk reviews and investigates a large class of Newton-type algorithms for nonlinear optimization (...)
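
For orientation, the common template behind Newton-type methods for an unconstrained problem min_x f(x) is the iteration

```latex
\[
x_{k+1} = x_{k} - M_{k}^{-1} \nabla f(x_{k}),
\]
```

where M_k approximates the Hessian of f at x_k: the exact Hessian gives Newton's method, while Gauss-Newton-style choices exploit problem structure, such as convexity of individual cost terms, to obtain cheaper approximations; constrained problems apply the analogous step to their optimality conditions.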

From playlist Intersections between Control, Learning and Optimization 2020

Julio Banga 05/11/18

Optimality principles and identification of dynamic models of biosystems

From playlist Spring 2018

Computing Limits from a Graph with Infinities

In this video I do an example of computing limits from a graph with infinities.

From playlist Limits

Related pages

Control (optimal control theory) | Differential equation | MATLAB | Dynamical system | Mathematical optimization | Trajectory optimization | PROPT | Dynamic programming | Bellman equation | Hamiltonian system | Initial condition | CasADi | Calculus of variations | Lagrange multiplier | Hamilton–Jacobi–Bellman equation | GPOPS-II | Operations research | Optimality criterion | Bellman pseudospectral method | Control theory | Stochastic control | Function (mathematics) | ASTOS | PID controller | Pontryagin's maximum principle | Gauss pseudospectral method | Pseudospectral optimal control | Collocation method | TOMLAB | DNSS point | Riccati equation | Hamiltonian (control theory) | SNOPT | Sliding mode control | Kalman filter | Shadow price | Generalized filtering | Matrix (mathematics) | Constraint (mathematics) | Controllability | JModelica.org