Abstract

Designing efficient traffic signal controllers has long been an important concern in traffic engineering, owing to the complex and uncertain nature of traffic environments. In this context, reinforcement learning has been one of the most successful methods because of its adaptability and online learning ability. Reinforcement learning allows traffic signals to automatically determine the ideal behaviour for achieving their objective of alleviating traffic congestion: signals based on reinforcement learning can learn and react flexibly to different traffic situations without the need for a predefined model of the environment. In this research, the actor-critic method is used for adaptive traffic signal control (ATSC-AC). Actor-critic combines the advantages of actor-only and critic-only methods. One of the most important issues in reinforcement learning is the trade-off between exploration of the traffic environment and exploitation of the knowledge already obtained. To tackle this challenge, two direct exploration methods are adapted to traffic signal control and compared with two indirect exploration methods. The results reveal that ATSC-ACs based on direct exploration methods perform best and consistently outperform a fixed-time controller, reducing average travel time by 21%.
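As a rough illustration of the kind of controller the abstract describes (not the paper's actual implementation), the Python sketch below pairs a tabular softmax actor with a TD-learning critic and uses a count-based bonus as one simple example of a directed exploration scheme. The state encoding (discretised queue lengths), the reward (negative total queue length), the exploration bonus, and all hyper-parameters are illustrative assumptions.

```python
# A minimal actor-critic sketch for a traffic signal, assuming a tabular
# representation and a count-based exploration bonus; hyper-parameters,
# state/reward definitions and the bonus form are not from the paper.
import numpy as np
from collections import defaultdict

class ActorCriticSignal:
    def __init__(self, n_phases, alpha_actor=0.1, alpha_critic=0.2,
                 gamma=0.95, bonus=0.5):
        self.n_phases = n_phases
        self.alpha_a, self.alpha_c, self.gamma = alpha_actor, alpha_critic, gamma
        self.bonus = bonus                                     # weight of the exploration bonus
        self.h = defaultdict(lambda: np.zeros(n_phases))       # actor: phase preferences
        self.v = defaultdict(float)                            # critic: state values
        self.counts = defaultdict(lambda: np.ones(n_phases))   # visit counts for the bonus

    def policy(self, state):
        # Softmax over preferences plus a bonus that favours rarely tried
        # phases -- one simple form of directed exploration.
        prefs = self.h[state] + self.bonus / np.sqrt(self.counts[state])
        exp = np.exp(prefs - prefs.max())
        return exp / exp.sum()

    def act(self, state):
        pi = self.policy(state)
        action = np.random.choice(self.n_phases, p=pi)
        self.counts[state][action] += 1
        return action

    def update(self, state, action, reward, next_state):
        # The critic's TD error drives both the value and the policy updates.
        delta = reward + self.gamma * self.v[next_state] - self.v[state]
        self.v[state] += self.alpha_c * delta
        grad = -self.policy(state)
        grad[action] += 1.0          # d log pi(a|s) / d h for a softmax actor
        self.h[state] += self.alpha_a * delta * grad

# Toy usage: state = discretised queue lengths per approach, reward = -total queue.
agent = ActorCriticSignal(n_phases=4)
state = (2, 0, 3, 1)
for _ in range(100):
    phase = agent.act(state)
    queues = np.random.randint(0, 5, size=4)   # stand-in for a traffic simulator
    next_state = tuple(queues)
    agent.update(state, phase, -queues.sum(), next_state)
    state = next_state
```

In a real setting the random queue draw would be replaced by a microscopic traffic simulator, and the exploration term could be swapped for whichever direct or indirect methods are being compared.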



Document information

Published on 01/01/2019

Volume 2019, 2019
DOI: 10.1680/jtran.17.00085
Licence: CC BY-NC-SA
