Abstract

Monocular motion analysis for advanced driver assistance systems (ADAS) is a very active research topic. However, two constraints limit the deployment of existing techniques in autonomous vehicles: poorly textured regions and large displacements induced by vehicle egomotion, both of which lead to matching ambiguities. Coarse-to-fine strategies are generally used to deal with large motion, but the lack of texture makes this approach inefficient for estimating the relative displacement of the road. In this paper, we propose to assist the optical flow process by exploiting both a 3D scene model and a rough velocity estimate, obtained either from other embedded sensors or from egomotion estimates over the previous frames. Using this a priori knowledge allows us to compensate the dominant flow, so that the remaining part can be estimated by a classical optical flow method. We give results on both synthetic and real image sequences and compare our approach to other existing methods.
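To make the idea concrete, the sketch below illustrates the general scheme described in the abstract: a dominant flow field is predicted from a depth model of the scene and a rough egomotion estimate, the second image is warped to compensate that flow, and a classical optical flow method estimates the small residual motion. This is a minimal illustration, not the authors' exact implementation: the pinhole back-projection helper, the supplied depth map, and the use of OpenCV's Farnebäck estimator as the "classical" method are all assumptions made for the example.

```python
import cv2
import numpy as np

def dominant_flow_from_egomotion(shape, K, R, t, depth):
    """Hypothetical helper: flow predicted by a 3D scene model (here a dense
    depth map) and a rough egomotion estimate (R, t) via pinhole geometry."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1)   # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                        # back-project to rays
    X = rays * depth[..., None]                            # 3D points in camera 0
    Xc = X @ R.T + t                                       # apply camera motion
    proj = Xc @ K.T                                        # re-project into camera 1
    u = proj[..., 0] / proj[..., 2]
    v = proj[..., 1] / proj[..., 2]
    return np.stack([u - xs, v - ys], axis=-1)             # predicted dominant flow

def assisted_optical_flow(img0, img1, dominant):
    """Compensate the dominant flow by warping img1 back toward img0, then
    estimate the remaining (small) motion with a classical dense method."""
    h, w = img0.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    warped = cv2.remap(img1, xs + dominant[..., 0], ys + dominant[..., 1],
                       cv2.INTER_LINEAR)
    residual = cv2.calcOpticalFlowFarneback(img0, warped, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
    return dominant + residual   # total flow = compensated part + residual
```

In this setting the residual displacements are small even on the weakly textured road surface, which is what allows a standard coarse-to-fine estimator to succeed where it would otherwise fail on the full, large-displacement flow.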


Original document

The different versions of the original document can be found in:

http://dx.doi.org/10.1109/icip.2013.6738796
https://ieeexplore.ieee.org/document/6738796
https://hal.archives-ouvertes.fr/hal-00841283,
https://academic.microsoft.com/#/detail/2049698335

Document information

Published on 01/01/2013

Volume 2013
DOI: 10.1109/icip.2013.6738796
Licence: CC BY-NC-SA
