The Trade-Off between Model Fit, Invariance, and Validity: The Case of PISA Science Assessments

Yasmine H. El Masri, David Andrich

Research output: Contribution to journal › Article › peer-review



In large-scale educational assessments, it is generally required that tests are composed of items that function invariantly across the groups to be compared. Despite efforts to ensure invariance in the item construction phase, for a range of reasons (including the security of items) it is often necessary to account for differential item functioning (DIF) post hoc. This typically requires a choice among retaining an item as it is despite its DIF, deleting the item, or resolving (splitting) the item by creating a distinct item for each group. These options involve a trade-off between model fit and the invariance of item parameters, and each option could be valid depending on whether the source of the DIF is relevant or irrelevant to the variable being assessed. We argue that making a choice requires a careful analysis of statistical DIF and its substantive source. We illustrate our argument by analyzing PISA 2006 science data from three countries (the UK, France, and Jordan) using the Rasch model, which was the model used for the analyses of all PISA 2006 data. We identify items with real DIF across countries and examine the implications for model fit, invariance, and the validity of cross-country comparisons when these items are either eliminated, resolved, or retained.
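For readers unfamiliar with the modeling framework, the following is a brief sketch of the standard dichotomous Rasch model and of what "resolving" an item for DIF means formally. The notation here (person ability $\beta_n$, item difficulty $\delta_i$, group index $g$) is the conventional one and is supplied for illustration; the paper's exact parameterization may differ.

```latex
% Dichotomous Rasch model: probability that person n answers item i correctly
P(X_{ni} = 1 \mid \beta_n, \delta_i)
  = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)}

% Resolving (splitting) item i for DIF across groups g = 1, \dots, G
% replaces the single invariant difficulty \delta_i with group-specific
% difficulties \delta_{ig}:
P(X_{ni} = 1 \mid \beta_n, \delta_{ig})
  = \frac{\exp(\beta_n - \delta_{ig})}{1 + \exp(\beta_n - \delta_{ig})},
  \quad n \in \text{group } g
```

Splitting improves model fit by absorbing the group difference into the item parameters, but it sacrifices the invariance of $\delta_i$ across groups, which is the trade-off at the center of the article.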

Original language: English
Pages (from-to): 174-188
Number of pages: 15
Journal: Applied Measurement in Education
Issue number: 2
Publication status: Published - 2 Apr 2020


