Abstract

Future intelligent environments and systems may need to interact with humans while simultaneously analyzing events and critical situations. Assisted living, advanced driver assistance systems, and intelligent command-and-control centers are just a few of the cases where human interactions play a critical role in situation analysis. In particular, the behavior or body language of a human subject may be a strong indicator of the context of the situation. In this paper, we demonstrate how the interaction of a human observer's head pose and eye gaze behaviors can provide significant insight into the context of an event. Such semantic data derived from human behaviors can be used to help interpret and recognize an ongoing event. We present examples from driving and intelligent meeting rooms to support these conclusions, and demonstrate how these techniques can improve contextual learning.
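To make the idea concrete, the sketch below illustrates one plausible way such behavioral cues could feed event recognition: per-frame head-pose and eye-gaze measurements are pooled over a short time window into a behavior descriptor that includes the head-gaze offset (capturing their interaction), and a standard classifier maps that descriptor to a context label. This is a minimal illustration under stated assumptions, not the authors' implementation; the window size, feature set, toy labels, and the choice of logistic regression are all hypothetical.

```python
# Minimal sketch (illustrative, not the paper's method): pool head-pose and
# gaze signals over a window, build a descriptor, classify the context.
import numpy as np
from sklearn.linear_model import LogisticRegression

WINDOW = 30  # frames per window (~1 s at 30 fps); an assumption


def behavior_descriptor(head_pose, gaze):
    """Summarize head pose (yaw, pitch) and gaze (x, y) over one window.

    head_pose: (WINDOW, 2) array of yaw/pitch angles in degrees.
    gaze:      (WINDOW, 2) array of gaze directions in degrees.
    Returns a fixed-length vector of means, standard deviations, and the
    mean head-gaze offset, which encodes how gaze moves relative to head.
    """
    offset = gaze - head_pose  # eye-in-head component of the gaze behavior
    return np.concatenate([
        head_pose.mean(axis=0), head_pose.std(axis=0),
        gaze.mean(axis=0), gaze.std(axis=0),
        offset.mean(axis=0),
    ])


# Toy training data with made-up labels, e.g. 0 = "attending forward",
# 1 = "checking mirror"; real labels would come from annotated events.
rng = np.random.default_rng(0)


def toy_window(yaw_mean):
    head = rng.normal([yaw_mean, 0.0], 2.0, size=(WINDOW, 2))
    gaze = head + rng.normal(0.0, 5.0, size=(WINDOW, 2))
    return behavior_descriptor(head, gaze)


X = np.stack([toy_window(0.0) for _ in range(50)] +
             [toy_window(40.0) for _ in range(50)])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("predicted context:", clf.predict(toy_window(38.0)[None, :]))
```

In the driving example from the abstract, the labels would correspond to maneuvers or attentional states; the toy data here merely separates two head-yaw regimes to show the pipeline end to end.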


Original document

The different versions of the original document can be found at:

https://dblp.uni-trier.de/db/conf/cvpr/cvprw2009.html#DoshiT09
http://yadda.icm.edu.pl/yadda/element/bwmeta1.element.ieee-000005204215
https://academic.microsoft.com/#/detail/2123902190
http://dx.doi.org/10.1109/cvprw.2009.5204215


DOIs: 10.1109/cvpr.2009.5204215, 10.1109/cvprw.2009.5204215


Document information

Published on 01/01/2009

Volume 2009, 2009
DOI: 10.1109/cvprw.2009.5204215
Licence: CC BY-NC-SA

