Implementing Deep Reinforcement Learning (DRL)-based Driving Styles for Non-Player Vehicles

DOI:

https://doi.org/10.17083/ijsg.v10i4.638

Keywords:

Reinforcement Learning, Automotive Driving, Serious Games, Autonomous Agents, Racing Games, Driving Styles, Decision Making, PPO

Abstract

We propose a new, hierarchical architecture for the behavioral planning of vehicle models usable as realistic non-player vehicles in serious games related to traffic and driving. These agents, trained with deep reinforcement learning (DRL), decide their motion by making high-level decisions, such as “keep lane”, “overtake”, and “go to rightmost lane”. This resembles a driver’s high-level reasoning and takes into account the availability of advanced driver assistance systems (ADAS) in current vehicles. Compared to a low-level decision-making system, our model performs better in terms of both safety and speed. As a significant advantage, the proposed approach reduces the number of training steps by more than one order of magnitude. This makes the development of new models much more efficient, which is key for implementing vehicles featuring different driving styles. We also demonstrate that, simply by tweaking the reinforcement learning (RL) reward function, it is possible to train agents characterized by different driving behaviors. Furthermore, we employed a continual learning technique, starting the training procedure of a more specialized agent from a base model. This significantly reduced the number of training steps while maintaining similar vehicular performance figures. However, the characteristics of the specialized agents are strongly influenced by those of the baseline agent.
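To make the core ideas concrete, the following is a minimal, hypothetical sketch, not the authors’ code: a gymnasium-style toy environment whose discrete actions mirror the paper’s high-level decisions, trained with the stable-baselines3 PPO implementation. The environment name (ToyHighwayEnv), its dynamics, and the reward weights w_speed / w_safety are invented for illustration; the paper’s actual simulator, observation space, and reward terms differ.

```python
# Hypothetical sketch (not the authors' code): high-level decision making,
# style-dependent rewards, and continual learning from a base model.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

KEEP_LANE, OVERTAKE, GO_RIGHTMOST = 0, 1, 2  # high-level decisions


class ToyHighwayEnv(gym.Env):
    """Toy abstraction: observation = (speed, gap ahead, lane index), all in [0, 1].
    Lane 0 is the rightmost lane."""

    def __init__(self, w_speed=1.0, w_safety=1.0, n_lanes=3):
        self.action_space = spaces.Discrete(3)
        self.observation_space = spaces.Box(0.0, 1.0, shape=(3,), dtype=np.float32)
        self.w_speed, self.w_safety, self.n_lanes = w_speed, w_safety, n_lanes

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.speed, self.gap, self.lane, self.t = 0.5, 0.5, 0, 0
        return self._obs(), {}

    def _obs(self):
        return np.array([self.speed, self.gap, self.lane / (self.n_lanes - 1)],
                        dtype=np.float32)

    def step(self, action):
        # Extremely simplified dynamics: an ADAS-like layer is assumed to turn
        # each high-level decision into safe low-level control.
        if action == KEEP_LANE:
            self.speed = min(1.0, self.speed + 0.05 * self.gap)
            self.gap = max(0.0, self.gap - 0.1)
        elif action == OVERTAKE:
            self.lane = min(self.n_lanes - 1, self.lane + 1)
            self.gap = float(self.np_random.uniform(0.3, 1.0))
        else:  # GO_RIGHTMOST
            self.lane = max(0, self.lane - 1)
            self.speed = max(0.1, self.speed - 0.05)
        risk = max(0.0, 0.3 - self.gap)  # penalize tailgating
        # Style knobs: the balance of these two terms shapes the driving style.
        reward = self.w_speed * self.speed - self.w_safety * risk
        self.t += 1
        return self._obs(), reward, False, self.t >= 200, {}


# Train a base ("neutral") agent, then save it.
base = PPO("MlpPolicy", ToyHighwayEnv(w_speed=1.0, w_safety=1.0), verbose=0)
base.learn(total_timesteps=50_000)
base.save("base_driver")

# Continual learning: start a more aggressive style from the base weights
# instead of training from scratch, as the abstract describes.
aggressive_env = ToyHighwayEnv(w_speed=2.0, w_safety=0.5)
aggressive = PPO.load("base_driver", env=aggressive_env)
aggressive.learn(total_timesteps=10_000)  # far fewer steps than from scratch
```

Under this setup, fine-tuning from the saved base policy needs far fewer steps than training the specialized style from scratch, which is the continual-learning effect the abstract reports; it also makes the specialized agent inherit traits of the base agent, matching the abstract’s caveat.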


Published

2023-11-25

Section

GaLA Conf 2022 Special Issue

How to Cite

Implementing Deep Reinforcement Learning (DRL)-based Driving Styles for Non-Player Vehicles. (2023). International Journal of Serious Games, 10(4), 153-170. https://doi.org/10.17083/ijsg.v10i4.638
