Practical reinforcement learning of stabilizing economic MPC
Paper in proceedings, 2019

Reinforcement Learning (RL) has demonstrated great potential for learning optimal policies without any prior knowledge of the process to be controlled. Model Predictive Control (MPC) is a popular control technique that can handle nonlinear dynamics as well as state and input constraints. The main drawback of MPC is the need to identify an accurate model, which in many cases cannot be easily obtained. Because of model inaccuracy, MPC can fail to deliver satisfactory closed-loop performance. Using RL to tune the MPC formulation or, conversely, using MPC as a function approximator in RL makes it possible to combine the advantages of the two techniques. This approach has important advantages, but it requires an adaptation of the existing algorithms. We therefore propose an improved RL algorithm for MPC and test it in simulations on a rather challenging example.
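The abstract's core idea is to treat a parameterized MPC scheme as the function approximator in RL, letting a temporal-difference update tune the MPC cost parameters. As a loose illustration only (not the authors' algorithm), the sketch below applies a Q-learning-style update to a terminal-cost weight `theta` in a one-step lookahead controller for a scalar linear system; the model, costs, and all names are hypothetical.

```python
# Hedged sketch: TD-style tuning of an MPC terminal-cost weight (illustrative).
a, b = 0.9, 0.5            # assumed scalar model x+ = a*x + b*u
gamma, alpha = 0.95, 0.01  # discount factor, learning rate

def stage_cost(x, u):
    # Illustrative quadratic stage cost (plays the role of the economic cost)
    return x * x + 0.1 * u * u

def q_value(x, u, theta):
    # One-step "MPC" value: stage cost + discounted parameterized terminal cost
    xn = a * x + b * u
    return stage_cost(x, u) + gamma * theta * xn * xn

def policy(x, theta):
    # Closed-form argmin over u of the quadratic q_value
    return -gamma * theta * a * b * x / (0.1 + gamma * theta * b * b)

def td_update(x, theta):
    # Q-learning update: theta <- theta + alpha * delta * dQ/dtheta
    u = policy(x, theta)
    xn = a * x + b * u                       # plant step (here plant == model)
    v_next = q_value(xn, policy(xn, theta), theta)
    delta = stage_cost(x, u) + gamma * v_next - q_value(x, u, theta)
    grad = gamma * (a * x + b * u) ** 2      # dQ/dtheta for this parameterization
    return theta + alpha * delta * grad

theta, x = 1.0, 2.0
for _ in range(200):
    theta = td_update(x, theta)
    x = a * x + b * policy(x, theta)         # closed-loop rollout
```

In the paper's setting the value function is produced by a full MPC problem rather than a closed-form quadratic, but the parameter-update structure is the same idea: the TD error drives the MPC parameters toward better closed-loop performance despite model inaccuracy.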

Authors

Mario Zanon

IMT Alti Studi Lucca

Sebastien Gros

Chalmers, Electrical Engineering, Systems and Control

Alberto Bemporad

IMT Alti Studi Lucca

2019 18th European Control Conference, ECC 2019

pp. 2258-2263, article no. 8795816
978-3-907144-00-8 (ISBN)

18th European Control Conference, ECC 2019
Naples, Italy

Subject categories

Bioinformatics (computational biology)

Control engineering

Computer science

DOI

10.23919/ECC.2019.8795816

More information

Last updated

2020-10-01