Even though many aspects of automated driving have not yet become reality, many human factors issues have already been investigated. However, recent discussions have revealed common misconceptions in both research and society about vehicle automation and its levels. This may be due to the fact that automated driving functions are often misleadingly named (cf. Autopilot) and that vehicles integrate functions at different automation levels (L1 lane keeping assistant, L2/L3 traffic jam assist, L4 valet parking). The user interface is one of the most critical issues in the interaction between humans and vehicles, and diverging mental models may be a major challenge here. Today's (manual) vehicles are ill-suited for appropriate HMI testing of automated vehicles. Instead, virtual or mixed reality may offer a much better playground for testing new interaction concepts in an automated driving setting. In this workshop, motivated by the conference theme, we will look into the potential of new digital realities for concepts, visualizations, and experiments in the car, e.g., by replacing all the windows with displays or transferring the entire environment into a VR world. We are further interested in discussing novel forms of interaction (speech, gestures, gaze-based interaction) and information displays to support the driver/passenger.