
Diaz, John Alexander Silva; Koehler, Carmen and Hartig, Johannes (2022): Performance of Infit and Outfit Confidence Intervals Calculated via Parametric Bootstrapping. In: Applied Measurement in Education, Vol. 35, No. 2: pp. 116-132.

Full text not available on 'Open Access LMU'.

Abstract

Testing item fit is central in item response theory (IRT) modeling, since a good fit is necessary to draw valid inferences from estimated model parameters. Infit and outfit statistics, widespread indices for detecting deviations from the Rasch model, are affected by data factors such as sample size. Consequently, the traditional use of fixed infit and outfit cutoff points is an ineffective practice. This article evaluates whether confidence intervals estimated via parametric bootstrapping provide more suitable cutoff points than the conventionally applied range of 0.8-1.2 and outfit critical ranges adjusted for sample size. The performance is evaluated under different sizes of misfit, sample sizes, and numbers of items. Results show that the confidence intervals performed better in terms of power, but had inflated Type I error rates, which resulted from mean square values pushed below unity in the large-misfit conditions. However, when performing a one-sided test with the upper limit of the confidence intervals, the aforementioned inflation was corrected.
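
The abstract describes percentile-based cutoffs for infit and outfit obtained by simulating data under the estimated Rasch model. The sketch below is a minimal illustration of that idea using NumPy only; it is not the authors' procedure. The abilities theta_hat and difficulties b_hat are hypothetical placeholders for estimates from a fitted Rasch model, and, unlike a full parametric bootstrap, the model is not re-estimated in each replication.

```python
# Conceptual sketch (not the authors' code): parametric bootstrap of
# infit/outfit cutoffs under the Rasch model, using NumPy only.
import numpy as np

rng = np.random.default_rng(1)

def rasch_prob(theta, b):
    """P(X=1) under the Rasch model for abilities theta and difficulties b."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def infit_outfit(x, p):
    """Item-level infit and outfit mean squares from responses x and model probabilities p."""
    resid2 = (x - p) ** 2
    var = p * (1.0 - p)
    infit = resid2.sum(axis=0) / var.sum(axis=0)   # information-weighted mean square
    outfit = (resid2 / var).mean(axis=0)           # unweighted mean of squared standardized residuals
    return infit, outfit

def bootstrap_cutoffs(theta_hat, b_hat, reps=1000, alpha=0.05):
    """Percentile cutoffs for each item's infit/outfit from data simulated under the model."""
    p = rasch_prob(theta_hat, b_hat)
    stats = np.empty((reps, 2, b_hat.size))
    for r in range(reps):
        x_sim = rng.binomial(1, p)                 # one parametric bootstrap sample
        stats[r, 0], stats[r, 1] = infit_outfit(x_sim, p)
    lo = np.quantile(stats, alpha / 2, axis=0)
    hi = np.quantile(stats, 1 - alpha / 2, axis=0)
    return lo, hi                                  # each of shape (2, n_items)

# Hypothetical estimates standing in for a fitted Rasch model:
theta_hat = rng.normal(0, 1, size=500)
b_hat = np.linspace(-2, 2, 20)
lo, hi = bootstrap_cutoffs(theta_hat, b_hat)
# A one-sided test, as discussed in the abstract, would flag only items whose
# observed mean square exceeds the corresponding upper limit in hi.
```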
