Abstract

A visually demanding driving environment, where the elements surrounding a driver are constantly and rapidly changing, requires the driver to make spatially large head turns. Many existing state-of-the-art vision-based head pose algorithms, however, still have difficulty continuously monitoring the head dynamics of a driver. This occurs because, from the perspective of a single camera, spatially large head turns induce self-occlusions of facial features, which are key elements in determining head pose. In this paper, we introduce a shape-feature-based multi-perspective framework for continuously monitoring the driver's head dynamics. The proposed approach utilizes a distributed camera setup to observe the driver over a wide range of head movements. Using head dynamics and a confidence measure based on the symmetry of facial features, a particular perspective is chosen to provide the final head pose estimate. Our analysis on real-world driving data shows promising results.
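The abstract does not spell out how the perspective is selected; the snippet below is a minimal Python sketch of what a symmetry-based selection step might look like. The data structures, the symmetry_confidence formula, and the select_perspective helper are illustrative assumptions rather than the authors' implementation, and the head-dynamics term mentioned in the abstract is omitted for brevity.

```python
# Hypothetical sketch: each camera yields a head pose estimate plus facial
# landmarks; the view whose landmarks look most left/right symmetric (i.e.
# least self-occluded) provides the final estimate.
from dataclasses import dataclass
from typing import Sequence
import numpy as np


@dataclass
class PerspectiveEstimate:
    camera_id: str
    pose_deg: np.ndarray    # (yaw, pitch, roll) in degrees, assumed per-camera output
    landmarks: np.ndarray   # (N, 2) image coordinates of detected facial landmarks


def symmetry_confidence(landmarks: np.ndarray) -> float:
    """Assumed confidence score in [0, 1] based on landmark symmetry.

    Landmarks are assumed ordered so that landmarks[i] and landmarks[-(i + 1)]
    form mirrored pairs (e.g. outer eye corners). A frontal, unoccluded face
    gives nearly equal distances to the face midline and a score near 1;
    a large head turn lowers the score.
    """
    midline_x = landmarks[:, 0].mean()
    n_pairs = len(landmarks) // 2
    left = np.abs(landmarks[:n_pairs, 0] - midline_x)
    right = np.abs(landmarks[::-1][:n_pairs, 0] - midline_x)
    asymmetry = np.abs(left - right) / (left + right + 1e-6)
    return float(1.0 - asymmetry.mean())


def select_perspective(estimates: Sequence[PerspectiveEstimate]) -> PerspectiveEstimate:
    """Pick the camera view with the highest symmetry confidence."""
    return max(estimates, key=lambda e: symmetry_confidence(e.landmarks))
```

An alternative design would fuse the per-camera estimates weighted by confidence instead of picking a single view; the abstract describes selecting one perspective, so the sketch follows that reading.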


Original document

The different versions of the original document can be found in:

http://dx.doi.org/10.1109/itsc.2013.6728568
https://trid.trb.org/view/1352729,
https://academic.microsoft.com/#/detail/1974094451

Document information

Published on 01/01/2014

Volume 2014, 2014
DOI: 10.1109/itsc.2013.6728568
Licence: CC BY-NC-SA

