
Stefan, Angelika M. (ORCID: https://orcid.org/0000-0003-3382-4746); Schönbrodt, Felix D. (ORCID: https://orcid.org/0000-0002-8282-3910); Evans, Nathan J.; and Wagenmakers, Eric-Jan (2022): Efficiency in sequential testing: Comparing the sequential probability ratio test and the sequential Bayes factor test. In: Behavior Research Methods, Vol. 54, No. 6, pp. 3100-3117.

Full text not available on 'Open Access LMU'.

Abstract

In a sequential hypothesis test, the analyst checks at multiple steps during data collection whether sufficient evidence has accrued to make a decision about the tested hypotheses. As soon as sufficient information has been obtained, data collection is terminated. Here, we compare two sequential hypothesis testing procedures that have recently been proposed for use in psychological research: the Sequential Probability Ratio Test (SPRT; Psychological Methods, 25(2), 206–226, 2020) and the Sequential Bayes Factor Test (SBFT; Psychological Methods, 22(2), 322–339, 2017). We show that although the two methods have different philosophical roots, they share many similarities and can even be regarded mathematically as two instances of an overarching hypothesis testing framework. We demonstrate that the two methods use the same mechanisms for evidence monitoring and error control, and that differences in efficiency between the methods depend on the exact specification of the statistical models involved, as well as on the population truth. Our simulations indicate that when deciding on a sequential design within a unified sequential testing framework, researchers need to balance test efficiency, robustness against model misspecification, and appropriate uncertainty quantification. We provide guidance for navigating these design decisions based on individual preferences and simulation-based design analyses.
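To illustrate the general idea of sequential testing described in the abstract, the following is a minimal sketch of a generic SPRT for a one-sample setting with known standard deviation (H0: mu = 0 vs. H1: mu = delta). It is not the implementation used by the authors; the values of delta, alpha, and beta are hypothetical choices, and Wald's classical stopping thresholds are assumed.

import numpy as np
from scipy.stats import norm

def sprt(data_stream, delta=0.5, sigma=1.0, alpha=0.05, beta=0.05):
    # Wald's stopping thresholds on the accumulated log likelihood ratio.
    upper = np.log((1 - beta) / alpha)   # crossing this boundary -> decide for H1
    lower = np.log(beta / (1 - alpha))   # crossing this boundary -> decide for H0
    log_lr = 0.0
    n = 0
    for x in data_stream:
        n += 1
        # Add the new observation's contribution to the log likelihood ratio.
        log_lr += norm.logpdf(x, loc=delta, scale=sigma) - norm.logpdf(x, loc=0.0, scale=sigma)
        if log_lr >= upper:
            return "accept H1", n
        if log_lr <= lower:
            return "accept H0", n
    return "no decision", n  # data exhausted before a boundary was crossed

# Example: simulate data under H1 and report the decision and the sample size at stopping.
rng = np.random.default_rng(1)
print(sprt(rng.normal(loc=0.5, scale=1.0, size=10_000)))

The SBFT follows the same monitor-and-stop logic, but the monitored quantity is a Bayes factor compared against evidence thresholds rather than a likelihood ratio compared against Wald's error-rate boundaries.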
