
Kraus, Elisabeth Barbara (ORCID: https://orcid.org/0000-0001-8007-0321); Wild, Johannes and Hilbert, Sven (2024): Using Interpretable Machine Learning for Differential Item Functioning Detection in Psychometric Tests. In: Applied Psychological Measurement, Vol. 48, No. 4-5: pp. 167-186

Abstract

This study presents a novel method to investigate test fairness and differential item functioning, combining psychometrics and machine learning. Test unfairness manifests itself in systematic and demographically imbalanced influences of confounding constructs on residual variances in psychometric modeling. Our method aims to account for the resulting complex relationships between response patterns and demographic attributes. Specifically, it measures the importance of individual test items and latent ability scores, in comparison to a random baseline variable, when predicting demographic characteristics. We conducted a simulation study to examine the functionality of our method under various conditions, such as linear and complex impact, unfairness, varying numbers of factors and unfair items, and varying test length. We found that our method detects unfair items as reliably as Mantel–Haenszel statistics or logistic regression analyses but generalizes to multidimensional scales in a straightforward manner. To apply the method, we used random forests to predict migration backgrounds from ability scores and single items of an elementary school reading comprehension test. One item was found to be unfair according to all proposed decision criteria. Further analysis of the item's content provided plausible explanations for this finding.
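The abstract describes the core idea: fit a classifier (here, a random forest) that predicts a demographic attribute from item responses and an ability score, then compare each item's feature importance against a purely random baseline variable. Below is a minimal, self-contained sketch of that idea using scikit-learn. It is not the authors' implementation: the simulated 1PL-style data, the sum-score ability proxy, and the simple "exceeds baseline importance" cutoff are all assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated data: 1000 test takers, 10 dichotomous items,
# and a binary demographic attribute (e.g., migration background).
n, k = 1000, 10
theta = rng.normal(size=n)                  # latent ability
group = rng.integers(0, 2, size=n)          # demographic attribute
b = rng.normal(size=k)                      # item difficulties
logits = theta[:, None] - b[None, :]        # 1PL-style response logits
logits[:, 0] += 0.8 * group                 # item 0 made "unfair" (DIF)
items = (rng.random((n, k)) < 1 / (1 + np.exp(-logits))).astype(int)

# Predictors: item responses, an ability proxy (sum score),
# and a pure-noise baseline variable to compare importances against.
ability = items.sum(axis=1)
baseline = rng.normal(size=n)
X = np.column_stack([items, ability, baseline])
names = [f"item_{i}" for i in range(k)] + ["ability", "baseline"]

# Random forest predicting the demographic attribute.
X_tr, X_te, y_tr, y_te = train_test_split(X, group, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# Permutation importance on held-out data; items whose importance
# clearly exceeds the random baseline are candidate unfair items.
imp = permutation_importance(rf, X_te, y_te, n_repeats=30, random_state=0)
cut = imp.importances_mean[-1]              # importance of the noise baseline
for name, m in zip(names, imp.importances_mean):
    flag = "  <-- exceeds baseline" if m > cut and name.startswith("item") else ""
    print(f"{name:10s} {m: .4f}{flag}")
```

Run as written, the planted DIF item (item_0) should typically receive an importance well above the noise baseline, while fair items hover near it; the paper's simulation study evaluates more principled decision criteria than this single-cutoff comparison.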
