TY - JOUR
T1 - The Trade-Off between Model Fit, Invariance, and Validity
T2 - The Case of PISA Science Assessments
AU - El Masri, Yasmine H.
AU - Andrich, David
PY - 2020/4/2
N2 - In large-scale educational assessments, it is generally required that tests are composed of items that function invariantly across the groups to be compared. Despite efforts to ensure invariance in the item construction phase, for a range of reasons (including the security of items) it is often necessary to account for differential item functioning (DIF) of items post hoc. This typically requires a choice among retaining an item as it is despite its DIF, deleting the item, or resolving (splitting) an item by creating a distinct item for each group. These options involve a trade-off between model fit and the invariance of item parameters, and each option could be valid depending on whether the source of DIF is relevant to the variable being assessed. We argue that making a choice requires a careful analysis of statistical DIF and its substantive source. We illustrate our argument by analyzing PISA 2006 science data of three countries (UK, France, and Jordan) using the Rasch model, which was the model used for the analyses of all PISA 2006 data. We identify items with real DIF across countries and examine the implications for model fit, invariance, and the validity of cross-country comparisons when these items are eliminated, resolved, or retained.
UR - http://www.scopus.com/inward/record.url?scp=85080921180&partnerID=8YFLogxK
DO - 10.1080/08957347.2020.1732384
M3 - Article
AN - SCOPUS:85080921180
VL - 33
SP - 174
EP - 188
JO - Applied Measurement in Education
JF - Applied Measurement in Education
SN - 0895-7347
IS - 2
ER -