Tactical Decision-Making in Autonomous Driving by Reinforcement Learning with Uncertainty Estimation
Paper in proceedings, 2020

Reinforcement learning (RL) can be used to create a tactical decision-making agent for autonomous driving. However, previous approaches only output decisions and do not provide information about the agent's confidence in the recommended actions. This paper investigates how a Bayesian RL technique, based on an ensemble of neural networks with additional randomized prior functions (RPF), can be used to estimate the uncertainty of decisions in autonomous driving. A method for classifying whether or not an action should be considered safe is also introduced. The performance of the ensemble RPF method is evaluated by training an agent on a highway driving scenario. It is shown that the trained agent can estimate the uncertainty of its decisions and indicate an unacceptable level when the agent faces a situation that is far from the training distribution. Furthermore, within the training distribution, the ensemble RPF agent outperforms a standard Deep Q-Network agent. In this study, the estimated uncertainty is used to choose safe actions in unknown situations. However, the uncertainty information could also be used to identify situations that should be added to the training process.
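The core idea of the abstract can be illustrated with a minimal sketch: each ensemble member k estimates Q_k(s, a) = f_k(s, a) + beta * p_k(s, a), where p_k is a fixed randomized prior and f_k is the trainable part, and the spread of the members' Q-estimates serves as the uncertainty measure for classifying an action as safe. This is not the paper's implementation; the class name, the tabular value representation, and the coefficient-of-variation threshold are illustrative assumptions (the paper uses neural networks).

```python
import random
import statistics


class EnsembleRPFAgent:
    """Toy sketch of a Q-ensemble with randomized prior functions (RPF).

    Each member k estimates Q_k(s, a) = f_k(s, a) + beta * p_k(s, a),
    where p_k is a fixed random prior and f_k is the trainable part
    (here just a per-action value table, for illustration only).
    """

    def __init__(self, n_members=10, n_actions=3, beta=1.0, seed=0):
        rng = random.Random(seed)
        self.beta = beta
        self.n_actions = n_actions
        # Fixed random priors: one bias per (member, action); never trained.
        self.priors = [[rng.gauss(0.0, 1.0) for _ in range(n_actions)]
                       for _ in range(n_members)]
        # Trainable part, initialized to zero.
        self.values = [[0.0] * n_actions for _ in range(n_members)]

    def q_values(self, member, state):
        # The state is ignored in this toy lookup; a real agent would
        # evaluate a neural network on the state here.
        return [self.values[member][a] + self.beta * self.priors[member][a]
                for a in range(self.n_actions)]

    def act(self, state, cv_threshold=0.5, fallback_action=0):
        # Collect Q-estimates per action across all ensemble members.
        per_action = list(zip(*(self.q_values(k, state)
                                for k in range(len(self.values)))))
        means = [statistics.mean(qs) for qs in per_action]
        stds = [statistics.stdev(qs) for qs in per_action]
        best = max(range(self.n_actions), key=lambda a: means[a])
        # Coefficient of variation as the uncertainty criterion: when the
        # ensemble disagrees too much, classify the greedy action as unsafe
        # and fall back to a predefined safe action.
        cv = stds[best] / (abs(means[best]) + 1e-9)
        return best if cv < cv_threshold else fallback_action
```

In this sketch, a situation far from the training distribution shows up as disagreement among members (the untrained f_k cannot cancel the random priors), so the coefficient of variation is large and the fallback action is chosen; once training has fit f_k to the targets, the members agree and the greedy action is taken.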

Authors

Carl-Johan E Hoel

Chalmers, Mechanics and Maritime Sciences (M2), Vehicle Engineering and Autonomous Systems

Volvo Group

Krister Wolff

Chalmers, Mechanics and Maritime Sciences (M2), Vehicle Engineering and Autonomous Systems

Leo Laine

Chalmers, Mechanics and Maritime Sciences (M2)

Volvo Group

IEEE Intelligent Vehicles Symposium, Proceedings

pp. 1563-1569, Article no. 9304614

31st IEEE Intelligent Vehicles Symposium, IV 2020
Virtual, Las Vegas, USA

Subject Categories

Other Computer and Information Science

Probability Theory and Statistics

Computer Science

DOI

10.1109/IV47402.2020.9304614

More information

Latest update

7/5/2021