Abstract
Driver monitoring can play an essential part in avoiding accidents by warning the driver and shifting the driver's attention to the traffic scene in time during critical situations. This applies to the different levels of automated driving, to take-over requests as well as to driving in manual mode. The driver's gaze direction has long served as a proxy for this purpose. The aim of this work is to introduce a robust gaze detection system. To this end, we make several contributions that are novel in the area of gaze detection systems. In particular, we propose a deep learning approach to predict gaze regions, based on informative features such as the driver's eye landmarks and head pose angles. Moreover, we introduce several post-processing techniques that improve accuracy by exploiting temporal information from videos and the availability of other vehicle signals. Finally, we validate our method with leave-one-driver-out cross-validation. Unlike previous studies, we do not use gaze to predict maneuver changes; instead, we consider the human-computer interaction aspect and use vehicle signals to improve the performance of the estimation. The proposed system achieves an accuracy of 92.3%, outperforming earlier landmark-based gaze estimators.
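The paper itself does not ship code; the following is a minimal sketch of the kind of pipeline the abstract describes: a small feed-forward classifier over per-frame eye landmarks and head pose angles, followed by a temporal majority-vote smoothing step over video frames. All dimensions, layer sizes, and the window length are illustrative assumptions, not values taken from the paper.

```python
# Sketch only: hypothetical feature sizes (12 eye-landmark coordinates,
# 3 head pose angles) and 8 gaze regions -- none of these come from the paper.
from collections import Counter

import torch
import torch.nn as nn


class GazeRegionNet(nn.Module):
    """Small feed-forward classifier over per-frame driver features."""

    def __init__(self, n_landmarks: int = 12, n_pose: int = 3, n_regions: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_landmarks + n_pose, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, n_regions),  # logits over gaze regions
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def smooth_predictions(frame_preds: list[int], window: int = 5) -> list[int]:
    """Sliding majority vote over per-frame predictions, exploiting the
    temporal coherence of gaze in video (one flavor of the post-processing
    the abstract mentions)."""
    smoothed = []
    for i in range(len(frame_preds)):
        lo = max(0, i - window // 2)
        hi = min(len(frame_preds), i + window // 2 + 1)
        smoothed.append(Counter(frame_preds[lo:hi]).most_common(1)[0][0])
    return smoothed


# Usage: classify a batch of frames, then smooth the label sequence over time.
model = GazeRegionNet()
features = torch.randn(100, 15)  # 100 frames x (12 landmark coords + 3 angles)
frame_preds = model(features).argmax(dim=1).tolist()
stable_preds = smooth_predictions(frame_preds)
```

Fusing further vehicle signals (e.g. steering or indicator state) could be done analogously, either as extra input features or as a rule-based correction on the smoothed label sequence; the abstract does not specify which variant the authors use.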
| Document type | Conference paper |
|---|---|
| Publication form | Publisher's Version |
| Faculty | Mathematics, Informatics and Statistics > Informatics > Artificial Intelligence and Machine Learning |
| Subject areas | 000 Computer science, information and general works > 000 Computer science, knowledge, systems |
| Language | English |
| Document ID | 92517 |
| Date published on Open Access LMU | 09 Sep 2022, 11:15 |
| Last modified | 09 Sep 2022, 11:15 |