Equating a large-scale writing assessment using pairwise comparisons of performances

Stephen Humphry, Joshua Mcgrane

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)
12 Downloads (Pure)


© 2015, The Australian Association for Research in Education, Inc.

This paper presents a method for equating writing assessments using pairwise comparisons that does not depend upon conventional common-person or common-item equating designs. Pairwise comparisons have been successfully applied in the assessment of open-ended tasks in English and in other areas such as visual art and philosophy. In this paper, pairwise comparisons were used to equate writing tests from two consecutive calendar years of Australia's large-scale assessment program. The viability of the method was demonstrated by the very high internal reliability of the pairwise scale, acceptable fit between the data and the models applied, and a very high correlation between the pairwise scale and the rubric scores for both calendar years. The equating constant was shown to be not statistically significant, consistent with stable population means for the writing assessment across calendar years. The method also provided external validation of the rubric assessments. The limitations and advantages of the method are described, and the broader implications of the study's findings are discussed.
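The pairwise-comparison approach described above places performances on a common scale from judges' paired judgements. As a hedged illustration (not the authors' code or data), the sketch below fits a simple Bradley–Terry model, a standard model for pairwise comparison data closely related to the Rasch-type models used in such equating studies. All scripts, win counts, and function names here are hypothetical.

```python
# Illustrative sketch: scaling hypothetical essay scripts from pairwise
# comparison outcomes with a Bradley-Terry model, fitted by the classic
# Zermelo/MM iteration. Not the method as implemented in the study.

import math

def fit_bradley_terry(wins, n_items, iters=200):
    """Estimate item 'merits' from pairwise win counts.

    wins[(i, j)] = number of times item i was preferred over item j.
    """
    p = [1.0] * n_items
    for _ in range(iters):
        new_p = []
        for i in range(n_items):
            w_i = sum(wins.get((i, j), 0) for j in range(n_items))
            denom = 0.0
            for j in range(n_items):
                if j == i:
                    continue
                n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
                if n_ij:
                    denom += n_ij / (p[i] + p[j])
            new_p.append(w_i / denom if denom else p[i])
        # Normalise so the geometric mean is 1 (fixes the scale origin).
        g = math.exp(sum(math.log(x) for x in new_p) / n_items)
        p = [x / g for x in new_p]
    return p

# Hypothetical comparisons among four scripts, 10 judgements per pair.
wins = {(0, 1): 8, (1, 0): 2, (0, 2): 9, (2, 0): 1,
        (1, 2): 6, (2, 1): 4, (2, 3): 7, (3, 2): 3,
        (1, 3): 7, (3, 1): 3, (0, 3): 9, (3, 0): 1}
merits = fit_bradley_terry(wins, 4)
# Log-merits give a linear (logit) scale comparable to Rasch locations.
scale = [math.log(x) for x in merits]
```

With a balanced design like this, the estimated merit order matches the win totals; in practice an equating constant between two years' scales could then be estimated from scripts judged in both years.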
Original language: English
Pages (from-to): 443-460
Journal: The Australian Educational Researcher
Issue number: 4
Early online date: 25 Feb 2015
Publication status: Published - Sept 2015

