Evaluating random error in clinician-administered surveys: Theoretical considerations and clinical applications of interobserver reliability and agreement

Rebecca J. Bennett, Dunay S. Taljaard, Michelle Olaithe, Chris Brennan-Jones, Robert H. Eikelboom

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

Purpose: The purpose of this study is to raise awareness of interobserver concordance and the differences between interobserver reliability and agreement when evaluating the responsiveness of a clinician-administered survey and, specifically, to demonstrate the clinical implications of data type (nominal/categorical, ordinal, interval, or ratio) and statistical index selection (for example, Cohen's kappa, Krippendorff's alpha, or intraclass correlation).

Methods: In this prospective cohort study, 3 clinical audiologists, who were masked to each other's scores, administered the Practical Hearing Aid Skills Test–Revised to 18 adult owners of hearing aids. Interobserver concordance was examined using a range of reliability and agreement statistical indices.

Results: The importance of statistical index selection was demonstrated with a worked example, in which the level of interobserver concordance achieved varied from "no agreement" to "almost perfect agreement" depending on the data type and statistical index selected.

Conclusions: This study demonstrates that the methodology used to evaluate survey score concordance can influence the statistical results obtained and thus affect clinical interpretations.
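The abstract's central point, that the same pair of ratings can yield very different concordance values depending on the index chosen, can be illustrated with a small sketch. The example below uses hypothetical ordinal item scores from two raters (not the study's data) and compares raw percent agreement, unweighted Cohen's kappa, and linearly weighted Cohen's kappa, implemented from the standard textbook formulas:

```python
def cohen_kappa(r1, r2, weighted=False):
    """Cohen's kappa for two raters; optional linear weights for ordinal data."""
    cats = sorted(set(r1) | set(r2))
    idx = {c: i for i, c in enumerate(cats)}
    n, k = len(r1), len(cats)

    # Observed confusion matrix as proportions.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1 / n

    # Marginal proportions for each rater.
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    # Disagreement weights: 0 on the diagonal; linear weights give
    # partial credit for near-misses on the ordinal scale.
    w = [[(abs(i - j) / (k - 1) if weighted else float(i != j))
          for j in range(k)] for i in range(k)]

    # kappa = 1 - (observed weighted disagreement / chance-expected disagreement)
    po = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1 - po / pe

# Hypothetical 0-4 ordinal scores from two raters on ten survey items.
rater_a = [0, 1, 2, 2, 3, 4, 4, 1, 2, 3]
rater_b = [0, 2, 2, 3, 3, 4, 3, 1, 1, 3]

exact_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"percent agreement:   {exact_agreement:.2f}")                          # 0.60
print(f"unweighted kappa:    {cohen_kappa(rater_a, rater_b):.2f}")            # 0.49
print(f"linear-weight kappa: {cohen_kappa(rater_a, rater_b, weighted=True):.2f}")  # 0.71
```

With every disagreement here being a one-step near-miss, the three indices land in three different benchmark bands (Landis and Koch would call 0.49 "moderate" and 0.71 "substantial"), which is the kind of interpretive divergence the abstract describes.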

Original language: English
Pages (from-to): 191-201
Number of pages: 11
Journal: American Journal of Audiology
Volume: 26
Issue number: 3
DOIs
Publication status: Published - 1 Sept 2017
