Abstract
Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Experiment 1 showed that the use of visual speech cues from lipreading is reduced when concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than on the (static) pictures. Experiment 2 used a deictic hand gesture to direct attention to the speaker, while the visual processing load was reduced by keeping the visual display constant across a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately, and even earlier than auditory information. Together, these data indicate that visual speech cues are not used automatically, but when they are used, they are used immediately.
| Item Type | Journal article |
|---|---|
| Faculties | Languages and Literatures > Department 2 |
| Subjects | 400 Language > 400 Language |
| ISSN | 1943-3921 |
| Language | English |