Abstract

Reinforcement learning (RL) constitutes a promising solution for alleviating the problem of traffic congestion. In particular, deep RL algorithms have been shown to produce adaptive traffic signal controllers that outperform conventional systems. However, in order to be reliable in highly dynamic urban areas, such controllers need to be robust with respect to a series of exogenous sources of uncertainty. In this paper, we develop an open-source callback-based framework for promoting the flexible evaluation of different deep RL configurations under a traffic simulation environment. With this framework, we investigate how deep RL-based adaptive traffic controllers perform under different scenarios, namely under demand surges caused by special events, capacity reductions from incidents, and sensor failures. We extract several key insights for the development of robust deep RL algorithms for traffic control and propose concrete designs to mitigate the impact of the considered exogenous uncertainties.
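The callback-based evaluation idea described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual API: the class, method, and callback names are hypothetical, and the simulation dynamics are a toy stand-in for a real traffic simulator.

```python
from typing import Callable, Dict, List

class EvalLoop:
    """Toy simulation loop that fires registered callbacks at every step,
    letting experimenters inject exogenous events (surges, incidents,
    sensor failures) without modifying the core loop. Hypothetical sketch."""

    def __init__(self) -> None:
        self.callbacks: List[Callable[[int, Dict[str, int]], None]] = []

    def register(self, cb: Callable[[int, Dict[str, int]], None]) -> None:
        self.callbacks.append(cb)

    def run(self, steps: int) -> Dict[str, int]:
        state = {"queue_length": 0}
        for t in range(steps):
            # Toy dynamics: 3 vehicles arrive, the controller serves 2.
            state["queue_length"] = max(0, state["queue_length"] + 3 - 2)
            for cb in self.callbacks:
                cb(t, state)  # callbacks may perturb state or log metrics
        return state

# Example callback: a demand surge (e.g. a special event) at step 5.
def surge(t: int, state: Dict[str, int]) -> None:
    if t == 5:
        state["queue_length"] += 10

loop = EvalLoop()
loop.register(surge)
final = loop.run(10)
print(final["queue_length"])  # prints 20
```

Decoupling the perturbations from the simulation loop in this way is what lets the same controller configuration be re-evaluated under each uncertainty scenario by swapping callbacks.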

Comment: 8 pages


Original document

The different versions of the original document can be found in:

http://dx.doi.org/10.1109/itsc.2019.8917451
https://arxiv.org/abs/1904.08353
http://arxiv.org/pdf/1904.08353.pdf
https://academic.microsoft.com/#/detail/2991611854
https://orbit.dtu.dk/ws/files/194352259/1904.08353.pdf

Document information

Published on 01/01/2019

Volume 2019, 2019
DOI: 10.1109/itsc.2019.8917451
Licence: CC BY-NC-SA
