Shi, Zhuanghua; Allenmark, Fredrik ORCID: 0000-0002-3127-4851; Zhu, Xiuna; Elliott, Mark A.; Müller, Hermann J. ORCID: 0000-0002-4774-5654 (February 2020): To quit or not to quit in dynamic search. In: Attention, Perception, & Psychophysics, Vol. 82: pp. 799-817
Full text not available from 'Open Access LMU'.

Abstract

Searching for targets among similar distractors requires more time as the number of items increases, with search efficiency measured by the slope of the reaction-time (RT)/set-size function. Horowitz and Wolfe (Nature, 394(6693), 575–577, 1998) found that the target-present RT slopes were similar for "dynamic" and for standard static search, even though the items were randomly reshuffled every 110 ms in dynamic search. Somewhat surprisingly, attempts to understand dynamic search have ignored that the target-absent RT slope was as low (or "flat") as the target-present slope—so that the mechanisms driving search performance under dynamic conditions remain unclear. Here, we report three experiments that further explored search in dynamic versus static displays. Experiment 1 confirmed that the target-absent:target-present slope ratio was close to or smaller than 1 in dynamic search, as compared with being close to or above 2 in static search. This pattern did not change when reward was assigned to either correct target-absent or correct target-present responses (Experiment 2), or when the search difficulty was increased (Experiment 3). Combining analysis of search sensitivity and response criteria, we developed a multiple-decisions model that successfully accounts for the differential slope patterns in dynamic versus static search. Two factors in the model turned out to be critical for generating the 1:1 slope ratio in dynamic search: the "quit-the-search" decision variable accumulated based upon the likelihood of "target absence" within each individual sample in the multiple-decisions process, whilst the stopping threshold was a linear function of the set size and reward manipulation.
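The quit-rule described above can be illustrated with a toy simulation. This is a minimal sketch, not the authors' fitted model: the function name, the per-sample detection probability `p_detect`, and the threshold parameters `quit_base` and `quit_slope` are all illustrative assumptions; only the overall structure (each 110-ms sample either yields a target detection or adds "target-absent" evidence toward a stopping threshold that grows linearly with set size) follows the abstract.

```python
import random

def simulate_dynamic_trial(set_size, target_present,
                           p_detect=0.9, quit_base=2.0, quit_slope=0.5,
                           sample_ms=110, rng=random):
    """Toy multiple-decisions trial (illustrative parameters, not fitted).

    On every display sample (one reshuffle, 110 ms), the observer either
    detects the target or accumulates one unit of 'target absent'
    evidence; search stops when that evidence reaches a threshold that
    is a linear function of set size, as in the abstract's model.
    """
    threshold = quit_base + quit_slope * set_size  # linear in set size
    quit_evidence = 0.0
    t = 0
    while True:
        t += sample_ms
        # Chance of finding the target in this sample (assumed form):
        if target_present and rng.random() < p_detect / set_size:
            return "present", t            # target-present response
        quit_evidence += 1.0               # one unit of absent evidence
        if quit_evidence >= threshold:
            return "absent", t             # quit-the-search decision
```

Because the threshold is linear in set size and each sample contributes a fixed amount of evidence, target-absent RTs in this sketch grow linearly with set size at a shallow rate (here 0.5 samples, i.e. 55 ms, per item), mimicking the "flat" absent slopes reported for dynamic search.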