Abstract
Two experiments used an eye-tracking paradigm to examine the time course of the use of auditory and visual speech cues in spoken word recognition. Experiment 1 showed that the use of visual speech cues from lipreading is reduced when concurrently presented pictures demand a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than on the (static) pictures. Experiment 2 used a deictic hand gesture to direct attention to the speaker, while the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately, and even earlier than auditory information. Together, these data indicate that visual speech cues are not used automatically, but that when they are used, they are used immediately.
Document type: | Journal article |
---|---|
Faculty: | Languages and Literatures > Department 2 |
Subjects: | 400 Language > 400 Language |
ISSN: | 1943-3921 |
Language: | English |
Document ID: | 55597 |
Date deposited on Open Access LMU: | 14 Jun 2018, 09:59 |
Last modified: | 4 Nov 2020, 13:35 |