Practical reinforcement learning of stabilizing economic MPC
Paper in proceedings, 2019

Reinforcement Learning (RL) has demonstrated great potential for learning optimal policies without any prior knowledge of the process to be controlled. Model Predictive Control (MPC) is a popular control technique able to handle nonlinear dynamics as well as state and input constraints. The main drawback of MPC is the need to identify an accurate model, which in many cases cannot be easily obtained; because of model inaccuracy, MPC can fail to deliver satisfactory closed-loop performance. Using RL to tune the MPC formulation or, conversely, using MPC as a function approximator in RL makes it possible to combine the advantages of the two techniques. This approach has important benefits, but it requires an adaptation of the existing algorithms. We therefore propose an improved RL algorithm for MPC and test it in simulations on a rather challenging example.
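The idea of using a parameterized MPC scheme as the function approximator in RL can be illustrated with a toy sketch. The code below is an assumption-laden illustration, not the authors' implementation: a scalar plant with a biased internal model, a one-step "MPC" whose objective serves as the Q-function, and a semi-gradient Q-learning update that adjusts the tunable parameter `theta` to compensate for the model error. All names, gains, and the system itself are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: true plant x+ = a_true*x + b*u, but the MPC
# plans with a biased model a_model. RL tunes theta to compensate.
a_true, b = 0.9, 1.0      # true dynamics (unknown to the controller)
a_model = 0.7             # inaccurate model used inside the MPC
gamma, alpha = 0.95, 1e-3 # discount factor and learning rate

def mpc_action(x, theta):
    """One-step MPC: argmin_u of x^2 + u^2 + gamma*theta*(a_model*x + b*u)^2.
    The objective is quadratic in u, so the minimizer is closed-form."""
    return -gamma * theta * a_model * b * x / (1.0 + gamma * theta * b**2)

def q_theta(x, u, theta):
    """MPC objective used as a parameterized Q-function approximator."""
    x_pred = a_model * x + b * u
    return x**2 + u**2 + gamma * theta * x_pred**2

theta, x = 1.0, 1.5
for _ in range(5000):
    u = mpc_action(x, theta) + 0.1 * rng.standard_normal()  # exploration noise
    x_next = a_true * x + b * u                             # real plant step
    stage_cost = x**2 + u**2
    # TD target bootstraps with the MPC's own value at the next state,
    # i.e. V_theta(x') = Q_theta(x', pi_theta(x')).
    target = stage_cost + gamma * q_theta(x_next, mpc_action(x_next, theta), theta)
    delta = target - q_theta(x, u, theta)                   # TD error
    grad = gamma * (a_model * x + b * u)**2                 # dQ/dtheta at (x, u)
    theta += alpha * delta * grad                           # semi-gradient step
    x = x_next

print(round(theta, 3))
```

The key point the sketch mirrors is that the learned parameter enters the MPC cost, so the policy always remains an MPC solution; RL only reshapes the optimization problem rather than replacing the controller.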

Authors

Mario Zanon

IMT School for Advanced Studies

Sebastien Gros

Chalmers, Electrical Engineering, Systems and control

Alberto Bemporad

IMT School for Advanced Studies

2019 18th European Control Conference, ECC 2019

pp. 2258-2263, Article no. 8795816
978-390714400-8 (ISBN)

18th European Control Conference, ECC 2019
Naples, Italy

Subject Categories

Bioinformatics (Computational Biology)

Control Engineering

Computer Science

DOI

10.23919/ECC.2019.8795816

Latest update

10/1/2020