Journal
APPLIED PSYCHOLOGICAL MEASUREMENT
Volume 33, Issue 1, Pages 42-57
Publisher
SAGE PUBLICATIONS INC
DOI: 10.1177/0146621607314044
Keywords
differential item functioning; item bias; measurement invariance; item response theory; likelihood ratio
Abstract
Differential item functioning (DIF) occurs when items on a test or questionnaire have different measurement properties for one group of people versus another, irrespective of group-mean differences on the construct. Methods for testing DIF require matching members of different groups on an estimate of the construct. Preferably, the estimate is based on a subset of group-invariant items called designated anchors. In this research, a quick and easy strategy for empirically selecting designated anchors is proposed and evaluated in simulations. Although the proposed rank-based approach is applicable to any method for DIF testing, this article focuses on likelihood-ratio (LR) comparisons between nested two-group item response models. The rank-based strategy frequently identified a group-invariant designated anchor set that produced more accurate LR test results than those using all other items as anchors. Group-invariant anchors were more difficult to identify as the percentage of differentially functioning items increased. Advice for practitioners is offered.
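The rank-based anchor-selection idea summarized above can be sketched minimally. This is an illustrative assumption of the procedure, not the authors' implementation: items are first screened for DIF with all other items as the anchor, then ranked by their likelihood-ratio statistics, and the k items showing the least evidence of DIF are designated as anchors for the final tests. The function name and the example statistics below are fabricated for illustration.

```python
# Hypothetical sketch of a rank-based anchor-selection step (not the
# authors' code). Assumes a first-stage scan has already produced one
# likelihood-ratio chi-square statistic per item, computed with all
# other items serving as the anchor.

def select_anchors(lr_stats, k):
    """Return indices of the k items with the smallest LR statistics,
    i.e., the items showing the least evidence of DIF."""
    ranked = sorted(range(len(lr_stats)), key=lambda i: lr_stats[i])
    return sorted(ranked[:k])

# Example first-stage LR chi-square values (fabricated numbers).
lr_stats = [0.8, 12.4, 1.1, 0.3, 9.7, 2.0]
anchors = select_anchors(lr_stats, k=3)
print(anchors)  # the three items with the smallest statistics
```

The designated anchors would then be held group-invariant while the remaining items are retested for DIF against nested two-group item response models.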