Reinforcement Learning Informed by Optimal Control
Paper in proceedings, 2019

Model-free reinforcement learning has seen tremendous advances in the last few years; however, practical applications of pure reinforcement learning are still limited by sample inefficiency and the difficulty of giving robustness and stability guarantees for the proposed agents. Given access to an expert policy, one can increase sample efficiency by learning not only from data but also from the expert's actions, enabling safer learning. In this paper we pose the question of whether expert learning can be accelerated and stabilized given access to a family of experts designed according to optimal control principles, and more specifically, linear quadratic regulators. In particular, we consider the nominal controller of a system as part of the action space of a reinforcement learning agent. Further, using the nominal controller, we design customized reward functions for training a reinforcement learning agent, and perform ablation studies on a set of simple benchmark problems.
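As a minimal sketch of the idea of combining a linear quadratic regulator expert with a learning agent: the snippet below computes a discrete-time LQR gain for a hypothetical double-integrator nominal model (the system, cost matrices, and the residual-action combination are illustrative assumptions, not the paper's actual setup) and lets an RL agent act as a residual on top of the expert action.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical nominal model: a discretized double integrator
# (assumption for illustration; not the benchmark used in the paper).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state cost
R = np.array([[0.1]])  # control cost

# Solve the discrete algebraic Riccati equation and form the LQR gain K.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def expert_action(x):
    """Nominal LQR expert policy: u = -K x."""
    return -K @ x

def combined_action(x, residual):
    """One way to put the nominal controller in the agent's action space:
    the RL policy outputs a residual added to the expert's action."""
    return expert_action(x) + residual

# With a zero residual the closed loop A - B K is that of the pure expert.
x0 = np.array([1.0, 0.0])
u0 = combined_action(x0, residual=np.zeros(1))
```

Since the LQR gain stabilizes the nominal model, the spectral radius of the closed-loop matrix A - B K is below one, so a small learned residual perturbs a stable baseline rather than acting from scratch.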

Adaptive control

Expert learning

Optimal control

Linear quadratic control

Reinforcement learning

Online learning

Authors

Magnus Önnheim

University of Gothenburg

Chalmers, Mathematical Sciences, Algebra and geometry

Pontus Andersson

Chalmers, Mathematical Sciences, Algebra and geometry

University of Gothenburg

Emil Gustavsson

University of Gothenburg

Chalmers, Mathematical Sciences

Mats Jirstrand

Chalmers, Electrical Engineering, Systems and control

University of Gothenburg

Lecture Notes in Computer Science

0302-9743 (ISSN), 1611-3349 (eISSN)

Vol. 11731, pp. 403-407
978-3-030-30493-5 (ISBN)

28th International Conference on Artificial Neural Networks (ICANN)
Munich, Germany

Subject Categories

Learning

Control Engineering

Computer Science

DOI

10.1007/978-3-030-30493-5_40

More information

Latest update

6/2/2020