Adaptive Autopilot: Constrained DRL for Diverse Driving Behaviors
Paper in proceedings, 2024

In pursuit of autonomous vehicles, achieving human-like driving behavior is vital. This study introduces Adaptive Autopilot (AA), a unique framework utilizing constrained deep reinforcement learning (C-DRL). AA aims to safely emulate human driving to reduce the necessity for driver intervention. Focusing on the car-following scenario, the process involves (i) extracting data from the highD natural driving study and categorizing it into three driving styles using a rule-based classifier; (ii) employing deep neural network (DNN) regressors to predict human-like acceleration across styles; and (iii) using C-DRL, specifically the soft actor-critic Lagrangian technique, to learn human-like safe driving policies. Results indicate effectiveness at each step: the rule-based classifier distinguishes driving styles, the regressor model accurately predicts acceleration and outperforms traditional car-following models, and the C-DRL agents learn optimal policies for human-like driving across styles.
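The constrained-RL step (iii) centers on the soft actor-critic Lagrangian technique, in which a dual variable trades off the human-likeness reward against a safety cost constraint. The Python sketch below illustrates only that dual-ascent idea under stated assumptions; the cost limit, step size, and sample returns are illustrative placeholders, not the paper's hyperparameters or implementation.

```python
import math

# Minimal sketch of the Lagrangian dual-ascent step behind SAC-Lagrangian,
# as named in the abstract. All values here (cost_limit, lagrange_lr, the
# sample returns) are assumed for illustration, not the paper's settings.

cost_limit = 0.05   # d: tolerated expected safety cost per episode (assumed)
lagrange_lr = 0.01  # step size for the dual variable (assumed)

def policy_objective(reward_return, cost_return, lam):
    """Primal objective the actor maximizes:
    L(pi, lambda) = J_reward(pi) - lambda * (J_cost(pi) - d)."""
    return reward_return - lam * (cost_return - cost_limit)

def dual_step(log_lam, cost_return):
    """Dual ascent on lambda (parameterized as log(lambda) so it stays > 0):
    lambda grows while the safety constraint is violated and shrinks once
    the policy's expected cost falls back under the limit."""
    lam = math.exp(log_lam)
    grad = lam * (cost_return - cost_limit)  # d/d(log lam) of lam*(J_c - d)
    return log_lam + lagrange_lr * grad

if __name__ == "__main__":
    log_lam = 0.0  # lambda starts at 1.0
    # While the agent violates the constraint (J_cost > d), lambda increases:
    for _ in range(3):
        log_lam = dual_step(log_lam, cost_return=0.20)
    print("lambda after violations:", math.exp(log_lam))
    # Once episodes stay safely under the limit, lambda is driven back down:
    for _ in range(3):
        log_lam = dual_step(log_lam, cost_return=0.0)
    print("lambda after safe episodes:", math.exp(log_lam))
```

In this scheme the actor sees an increasingly penalized objective whenever safety costs exceed the limit, which is how the C-DRL agent can pursue human-like acceleration without sacrificing safe car-following behavior.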

Authors

Dinesh Cyril Selvaraj

Polytechnic University of Turin

Christian Vitale

University of Cyprus

Tania Panayiotou

University of Cyprus

Panayiotis Kolios

University of Cyprus

Carla Fabiana Chiasserini

Polytechnic University of Turin

Network and Systems

Georgios Ellinas

University of Cyprus

IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC

2153-0009 (ISSN) 2153-0017 (eISSN)

3383-3390
9798331505929 (ISBN)

27th IEEE International Conference on Intelligent Transportation Systems, ITSC 2024
Edmonton, Canada

Subject Categories (SSIF 2025)

Computer Sciences

Control Engineering

DOI

10.1109/ITSC58415.2024.10920172

More information

Latest update

4/15/2025