Abstract
Testing item fit is central to item response theory (IRT) modeling, since good fit is necessary to draw valid inferences from estimated model parameters. Infit and outfit statistics, widely used indices for detecting deviations from the Rasch model, are affected by data characteristics such as sample size. Consequently, the traditional practice of applying fixed infit and outfit cutoff points is ineffective. This article evaluates whether confidence intervals estimated via parametric bootstrapping provide more suitable cutoff points than the conventionally applied range of 0.8-1.2 and outfit critical ranges adjusted for sample size. Performance is evaluated under varying sizes of misfit, sample sizes, and numbers of items. Results show that the confidence intervals performed better in terms of power but had inflated Type I error rates, which resulted from mean square values being pushed below unity in the large-misfit conditions. However, when a one-sided test was performed using the upper bound of the confidence intervals, this inflation was eliminated.
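To make the bootstrapped-cutoff idea concrete, below is a minimal sketch in Python of a parametric bootstrap for outfit mean square cutoffs under a dichotomous Rasch model. It is not the authors' implementation: the function names (`bootstrap_outfit_cutoffs`, `outfit_msq`) and the plug-in of the original parameter estimates for each replicate are simplifying assumptions; in practice, item and person parameters would typically be re-estimated on every simulated data set.

```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_prob(theta, b):
    """P(correct) under the Rasch model for abilities theta (N,) and difficulties b (J,)."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def outfit_msq(responses, theta, b):
    """Unweighted (outfit) mean square per item: mean of squared standardized residuals."""
    p = rasch_prob(theta, b)
    var = p * (1.0 - p)
    z2 = (responses - p) ** 2 / var
    return z2.mean(axis=0)

def bootstrap_outfit_cutoffs(theta_hat, b_hat, n_boot=1000, alpha=0.05):
    """Parametric bootstrap: simulate Rasch data from the estimated parameters,
    recompute outfit for each replicate, and return per-item empirical quantiles."""
    stats = np.empty((n_boot, b_hat.size))
    p = rasch_prob(theta_hat, b_hat)
    for r in range(n_boot):
        sim = (rng.random(p.shape) < p).astype(float)
        # Assumption: parameters are not re-estimated per replicate, to keep the sketch short.
        stats[r] = outfit_msq(sim, theta_hat, b_hat)
    lower = np.quantile(stats, alpha / 2, axis=0)
    upper = np.quantile(stats, 1 - alpha / 2, axis=0)
    return lower, upper

# Toy example with hypothetical parameter estimates.
theta_hat = rng.normal(size=500)        # person abilities
b_hat = np.linspace(-2, 2, 20)          # item difficulties
lower, upper = bootstrap_outfit_cutoffs(theta_hat, b_hat)

observed = outfit_msq(
    (rng.random((500, 20)) < rasch_prob(theta_hat, b_hat)).astype(float),
    theta_hat, b_hat,
)
flagged_two_sided = (observed < lower) | (observed > upper)
flagged_one_sided = observed > upper    # the one-sided variant discussed in the abstract
print("Items flagged (two-sided):", np.where(flagged_two_sided)[0])
print("Items flagged (one-sided):", np.where(flagged_one_sided)[0])
```

The one-sided check against only the upper quantile mirrors the abstract's remedy for the inflated Type I error rate: underfit is signaled by mean squares above the upper bound, while values below unity are not flagged.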
Document type: | Journal article
---|---
Faculty: | Psychologie und Pädagogik
Subject areas: | 100 Philosophy and psychology > 150 Psychology
ISSN: | 0895-7347
Language: | English
Document ID: | 114904
Date published on Open Access LMU: | 02 Apr 2024, 08:07
Last modified: | 02 Apr 2024, 08:07