Modern Control Theory
Lecturer
Xiaopei Jiao (焦小沛, Assistant Professor, BIMSA)
Date
2025-09-03 ~ 2025-12-31
Schedule
Weekday: Wed
Time: 14:20-16:55
Venue: A3-1-101
Online: Zoom (ID: 242 742 6089, Password: BIMSA)
Prerequisite
Linear algebra, probability theory, calculus, optimization
Introduction
This course explores the deep connections between optimal control and reinforcement learning, bridging classical techniques (dynamic programming, LQR, MPC) with modern data-driven methods (Q-learning, policy gradients, deep RL). Students will learn: mathematical foundations (Bellman equations, value and policy iteration); optimal control (LQR, LQG, model predictive control); approximate DP and RL (Monte Carlo methods, temporal-difference learning, actor-critic methods); and applications in robotics, autonomous systems, and finance. The course balances theory (convergence, stability) with implementation (Python examples).
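As a small taste of the Bellman-equation and value-iteration material covered in the first part of the course, the sketch below runs value iteration on a toy two-state, two-action MDP. The transition probabilities, rewards, and discount factor are invented purely for illustration and are not taken from the course material.

```python
# Minimal value-iteration sketch on a toy MDP (illustrative data only).
import numpy as np

n_states, n_actions = 2, 2
gamma = 0.9  # discount factor

# P[s, a, s'] = transition probability, R[s, a] = expected immediate reward
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman optimality backup: V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s,a,s') V(s') ]
    Q = R + gamma * P @ V          # shape (n_states, n_actions)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:  # stop once the backup is (nearly) a fixed point
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)          # greedy policy with respect to the converged values
print("V* ~=", V, "greedy policy:", policy)
```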
Syllabus
Foundations of Optimal Control & Exact DP
1: Introduction to Dynamic Programming
2: Deterministic Continuous-Time Optimal Control
3: Stochastic DP and the LQG Problem
4: Model Predictive Control (MPC)
5: Infinite Horizon Problems
6: Shortest Path Problems & Computational Methods
Approximate DP & RL Basics
7: Approximate Value Iteration
8: Monte Carlo & Temporal Difference Learning
9: Policy Gradient Methods
10: Approximate Policy Iteration
Advanced Topics
11: Robust DP and H∞ Control
12: Multiagent RL and Games
13: Inverse Reinforcement Learning
14: Deep Reinforcement Learning
Reference
Bertsekas, D. P. (2019). Reinforcement Learning and Optimal Control. Athena Scientific.
Sutton, R. S., & Barto, A. G. Reinforcement Learning: An Introduction. MIT Press.
da Silva, B. C. Reinforcement Learning Lecture Notes (Fall 2022).
Video Public
Yes
Notes Public
Yes