Reinforcement Learning Informed by Optimal Control
Paper in proceedings, 2019

Model-free reinforcement learning has seen tremendous advances in the last few years; however, practical applications of pure reinforcement learning are still limited by sample inefficiency and by the difficulty of providing robustness and stability guarantees for the resulting agents. Given access to an expert policy, one can increase sample efficiency by learning not only from data but also from the expert's actions, enabling safer learning. In this paper we ask whether expert learning can be accelerated and stabilized when given access to a family of experts designed according to optimal control principles, more specifically linear quadratic regulators. In particular, we consider the nominal model of a system as part of the action space of a reinforcement learning agent. Further, using the nominal controller, we design customized reward functions for training a reinforcement learning agent, and we perform ablation studies on a set of simple benchmark problems.
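The experts mentioned in the abstract are linear quadratic regulators designed from a nominal model. As a minimal sketch of what such a nominal LQR controller looks like, the snippet below computes the standard discrete-time LQR gain for a toy double-integrator system; the system matrices and cost weights are hypothetical illustrations, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Discrete-time infinite-horizon LQR: return K such that the
    control law u = -K x minimizes sum_t (x'Qx + u'Ru)."""
    # Solve the discrete algebraic Riccati equation for P
    P = solve_discrete_are(A, B, Q, R)
    # Optimal gain: K = (R + B'PB)^{-1} B'PA
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Hypothetical double-integrator benchmark (position, velocity)
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt]])
Q = np.eye(2)          # state cost
R = np.array([[1.0]])  # control cost
K = lqr_gain(A, B, Q, R)

# The closed-loop system A - B K is stable: spectral radius < 1
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

In the setup described by the abstract, such a controller would serve as the expert whose actions inform the reinforcement learning agent, e.g. by entering the agent's action space or by shaping its reward.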

Adaptive control

Expert learning

Optimal control

Linear quadratic control

Reinforcement learning

Online learning

Authors

Magnus Önnheim

Göteborgs universitet

Chalmers, Matematiska vetenskaper, Algebra och geometri

Pontus Andersson

Chalmers, Matematiska vetenskaper, Algebra och geometri

Göteborgs universitet

Emil Gustavsson

Göteborgs universitet

Chalmers, Matematiska vetenskaper

Mats Jirstrand

Chalmers, Elektroteknik, System- och reglerteknik

Göteborgs universitet

Lecture Notes in Computer Science

0302-9743 (ISSN), 1611-3349 (eISSN)

Vol. 11731, pp. 403-407
978-3-030-30493-5 (ISBN)

28th International Conference on Artificial Neural Networks (ICANN)
Munich, Germany

Subject categories

Learning

Control Engineering

Computer Science

DOI

10.1007/978-3-030-30493-5_40

More information

Last updated

2020-06-02