Abstract

As mobility grows in urban areas, traffic congestion becomes more frequent and troublesome. Traffic signals are one way to reduce congestion in urban areas, but they need to be adjusted to account for the stochasticity of traffic. Reinforcement learning (RL) has been investigated in many recent papers as a promising approach to controlling such a stochastic environment. The goal of this paper is to analyze the feasibility of RL, particularly the Q-learning algorithm, for adaptive traffic signal control under different traffic dynamics. An RL controller was developed for an isolated multi-phase intersection using the microscopic traffic simulator Paramics. The novelty of this work lies in its methodology, which uses a new generalized state space with several known reward definitions. The results of this study demonstrate the advantage of RL over a fixed signal plan, yet exhibit different outcomes depending on the reward definition and the traffic dynamics being considered.
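For context, the tabular Q-learning update underlying such a controller can be sketched as follows. This is a minimal illustration, not the paper's implementation: the state encoding (here a hypothetical pair of queue lengths), the two actions (extend the current green vs. switch phase), the queue-based penalty reward, and the learning parameters are all illustrative assumptions.

```python
from collections import defaultdict

# Illustrative parameters; the paper's actual values are not reproduced here.
ALPHA = 0.1       # learning rate
GAMMA = 0.9       # discount factor
ACTIONS = (0, 1)  # e.g. 0 = extend current green, 1 = switch phase

def q_update(Q, state, action, reward, next_state):
    """One Q-learning step:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    """
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    return Q[(state, action)]

# One illustrative update: state = hypothetical queue lengths on two approaches,
# reward = negative total queue length (a common reward definition in this literature).
Q = defaultdict(float)
v = q_update(Q, state=(3, 2), action=0, reward=-5.0, next_state=(2, 2))
```

With all Q-values initialized to zero, this single step yields Q((3, 2), 0) = 0.1 * (-5.0) = -0.5; repeated over many simulated cycles, the table converges toward action values that reflect long-run delay.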


Original document

The different versions of the original document can be found in:

https://api.elsevier.com/content/article/PII:S1877050917309912?httpAccept=text/plain
http://dx.doi.org/10.1016/j.procs.2017.05.327 (under the license https://www.elsevier.com/tdm/userlicense/1.0/)
https://dblp.uni-trier.de/db/conf/ant/ant2017.html#TouhbiBNMHCS17
https://academic.microsoft.com/#/detail/2625186350

Document information

Published on 01/01/2017

Volume 2017, 2017
DOI: 10.1016/j.procs.2017.05.327
Licence: Other
