Rollout sampling approximate policy iteration
Journal article, 2008

Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a supervised learning problem. This paper proposes variants of an improved policy iteration scheme that treat the core sampling problem in evaluating a policy through simulation as a multi-armed bandit machine. The resulting algorithm offers performance comparable to that of the previous algorithm, achieved, however, with significantly less computational effort. An order-of-magnitude improvement is demonstrated experimentally in two standard reinforcement learning domains: inverted pendulum and mountain-car.
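As an illustration of the idea summarized above, the Python sketch below spreads a fixed rollout budget over a set of sampled states as if each state were an arm of a multi-armed bandit, and returns (state, empirically best action) pairs on which a classifier representing the improved policy could be trained. This is a minimal sketch only, not the paper's algorithm: the environment interface (sample_state, simulate), the UCB-like allocation score, and all parameter names and defaults are assumptions made here for concreteness.

import math
import random
from collections import defaultdict


def rollout_sampling_training_set(env, policy, actions, n_states=20,
                                  rollout_budget=2000, horizon=100, c=1.0):
    """Spread a fixed rollout budget over sampled states, treating each state
    as a bandit arm, and return (state, empirically best action) pairs."""
    states = [env.sample_state() for _ in range(n_states)]   # assumed env API
    counts = [0] * n_states                                  # rollouts spent per state
    returns = [defaultdict(list) for _ in states]            # per-state, per-action returns

    for t in range(1, rollout_budget + 1):
        # UCB-like score (an assumption, not the paper's exact criterion):
        # favour states with a large empirical action gap that are still under-sampled.
        def score(i):
            if counts[i] == 0:
                return float("inf")
            means = [sum(r) / len(r) for r in returns[i].values()]
            gap = max(means) - min(means) if len(means) > 1 else 0.0
            return gap + c * math.sqrt(math.log(t) / counts[i])

        i = max(range(n_states), key=score)
        a = random.choice(actions)                           # action to evaluate at this state
        g = env.simulate(states[i], a, policy, horizon)      # one rollout return (assumed API)
        returns[i][a].append(g)
        counts[i] += 1

    # Label each state where every action was tried with its empirically best action.
    training_set = []
    for s, ret in zip(states, returns):
        if len(ret) == len(actions):
            best = max(ret, key=lambda a: sum(ret[a]) / len(ret[a]))
            training_set.append((s, best))
    return training_set

In a full classification-based policy iteration loop, the returned pairs would be fed to a multiclass classifier to obtain the next policy, and the procedure repeated.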

Keywords

Approximate policy iteration, Bandit problems, Rollouts, Reinforcement learning, Classification, Sample complexity

Authors

Christos Dimitrakakis

Chalmers University of Technology, Computer Science and Engineering, Computing Science

M.G. Lagoudakis

Published in

Machine Learning

0885-6125 (ISSN) 1573-0565 (eISSN)

Vol. 72, Issue 3, pp. 157-171

Areas of Advance

Information and Communication Technology

Subject Categories

Computer and Information Science

DOI

10.1007/s10994-008-5069-3
