Abstract

In this paper, we present a deep neural network-based, real-time integrated framework that detects objects, lane markings, and drivable space using a monocular camera for advanced driver assistance systems. The object detection framework detects and tracks objects on the road such as cars, trucks, pedestrians, bicycles, motorcycles, and traffic signs. The lane detection framework identifies the different lane markings on the road and distinguishes between the ego-lane and adjacent-lane boundaries. The free space detection framework estimates the drivable space in front of the vehicle. In our integrated framework, we propose a pipeline that combines the three deep neural networks into a single framework, performing object detection, lane detection, and free space detection simultaneously. The integrated framework is implemented in C++ and runs in real time on the Nvidia Drive PX 2 platform.
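As a rough illustration of the pipeline structure described above, the following C++ sketch runs three per-frame perception stages (object detection, lane detection, free space estimation) on a simulated camera stream. All names, types, and stand-in functions here are hypothetical and are not taken from the paper's implementation; on the Drive PX 2 each stage would be a trained deep neural network rather than the dummy functions shown.

```cpp
// Illustrative sketch only: shows one plausible way to combine three
// per-frame perception stages in a single integrated pipeline.
// None of these names come from the paper; the DNNs are stubbed out.
#include <iostream>
#include <string>
#include <vector>

// Placeholder for a monocular camera frame; a real system would hold image data.
struct Frame {
    int index;
};

// Minimal per-task result types (hypothetical).
struct Detections  { std::vector<std::string> labels; };
struct LaneMarkings{ int numLanes = 0; };
struct FreeSpace   { double drivableAreaRatio = 0.0; };

// Stand-ins for the three deep neural networks; here they return dummy values.
Detections   detectObjects(const Frame&)     { return {{"car", "pedestrian"}}; }
LaneMarkings detectLanes(const Frame&)       { return {2}; }
FreeSpace    estimateFreeSpace(const Frame&) { return {0.6}; }

// Integrated pipeline: run all three perception tasks on each incoming frame.
void processFrame(const Frame& frame) {
    Detections   objects = detectObjects(frame);
    LaneMarkings lanes   = detectLanes(frame);
    FreeSpace    space   = estimateFreeSpace(frame);

    std::cout << "frame " << frame.index
              << ": objects=" << objects.labels.size()
              << ", lanes=" << lanes.numLanes
              << ", drivable=" << space.drivableAreaRatio << "\n";
}

int main() {
    // Simulate a short stream of camera frames.
    for (int i = 0; i < 3; ++i) {
        processFrame(Frame{i});
    }
    return 0;
}
```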


Original document

The different versions of the original document can be found in:

http://dx.doi.org/10.1109/worlds4.2019.8904020
https://www.narcis.nl/publication/RecordID/oai%3Apure.tue.nl%3Apublications%2Ff87b6a0e-b82e-48f5-8252-9f767365f74c
https://academic.microsoft.com/#/detail/2991151855

Document information

Published on 01/01/2019

Volume 2019, 2019
DOI: 10.1109/worlds4.2019.8904020
Licence: CC BY-NC-SA

