TY - JOUR
T1 - Asynchronous interpretation of manual and automated audiometry: agreement and reliability
AU - Brennan-Jones, Christopher G.
AU - Eikelboom, Robert H.
AU - Bennett, Rebecca J.
AU - Tao, Karina F.M.
AU - Swanepoel, De Wet
PY - 2018/1
Y1 - 2018/1
AB - Introduction: Remote interpretation of automated audiometry offers the potential to enable asynchronous tele-audiology assessment and diagnosis in areas where synchronous tele-audiometry may not be possible or practical. The aim of this study was to compare remote interpretation of manual and automated audiometry. Methods: Five audiologists each interpreted manual and automated audiograms obtained from 42 patients. The main outcome variable was the audiologist’s recommendation for patient management (which included treatment recommendations, referral or discharge) between the manual and automated audiometry test. Cohen’s Kappa and Krippendorff’s Alpha were used to calculate and quantify the intra- and inter-observer agreement, respectively, and McNemar’s test was used to assess the audiologist-rated accuracy of audiograms. Audiograms were randomised and audiologists were blinded as to whether they were interpreting a manual or automated audiogram. Results: Intra-observer agreement was substantial for management outcomes when comparing interpretations for manual and automated audiograms. Inter-observer agreement was moderate between clinicians for determining management decisions when interpreting both manual and automated audiograms. Audiologists were 2.8 times more likely to question the accuracy of an automated audiogram compared to a manual audiogram. Discussion: There is a lack of agreement between audiologists when interpreting audiograms, whether recorded with automated or manual audiometry. The main variability in remote audiogram interpretation is likely to be individual clinician variation, rather than automation. © The Author(s) 2016.
U2 - 10.1177/1357633X16669899
DO - 10.1177/1357633X16669899
M3 - Article
C2 - 27650162
SN - 1357-633X
VL - 24
SP - 37
EP - 43
JO - Journal of Telemedicine and Telecare
JF - Journal of Telemedicine and Telecare
IS - 1
ER -