
Abstract

Awareness of the road scene is an essential component for both autonomous vehicles and Advanced Driver Assistance Systems, and is gaining importance for both academia and car companies. This paper presents a way to learn a semantic-aware transformation which maps detections from a dashboard camera view onto a broader bird’s eye occupancy map of the scene. To this end, a huge synthetic dataset featuring 1M pairs of frames, taken from both the car dashboard and a bird’s eye view, has been collected and automatically annotated. A deep network is then trained to warp detections from the first view to the second. We demonstrate the effectiveness of our model against several baselines and observe that it is able to generalize to real-world data despite having been trained solely on synthetic data.
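
To make the view-warping task concrete, here is a minimal sketch of a network that regresses a detection’s bounding box from the frontal (dashboard) view to coordinates on the bird’s eye occupancy map. The abstract does not describe the authors’ architecture, so everything here is an illustrative assumption: the MLP layout, the (x, y, w, h) box encoding, the loss, and the names ViewWarpNet and train_step are all hypothetical, not the paper’s model.

```python
# Hypothetical sketch of the frontal-view -> bird's-eye warping task.
# Not the authors' architecture: layer sizes, box encoding, and loss
# are illustrative assumptions only.
import torch
import torch.nn as nn

class ViewWarpNet(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden),    # input: (x, y, w, h) in the dashboard view
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 4),    # output: (x, y, w, h) on the occupancy map
        )

    def forward(self, boxes):
        return self.mlp(boxes)

model = ViewWarpNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.SmoothL1Loss()

def train_step(front_boxes, bev_boxes):
    # Supervision would come from the synthetic dataset of aligned
    # dashboard / bird's-eye frame pairs described in the abstract.
    opt.zero_grad()
    loss = loss_fn(model(front_boxes), bev_boxes)
    loss.backward()
    opt.step()
    return loss.item()
```

Such a regressor learns the mapping directly from paired annotations; the automatically annotated synthetic frame pairs provide the ground-truth correspondences without manual labeling.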


Original document

The different versions of the original document can be found in:

http://dx.doi.org/10.1007/978-3-319-68560-1_21 (under the license http://www.springer.com/tdm)
https://link.springer.com/chapter/10.1007%2F978-3-319-68560-1_21
https://arxiv.org/abs/1706.08442
https://core.ac.uk/display/84093855
https://iris.unimore.it/handle/11380/1138818
https://www.arxiv-vanity.com/papers/1706.08442
https://academic.microsoft.com/#/detail/2732011728

Document information

Published on 01/01/2017

Volume 2017, 2017
DOI: 10.1007/978-3-319-68560-1_21
Licence: Other
