Calibrating Reliance on Automated Advice: Transparency and Trust Calibration Feedback

Research output: Contribution to journal › Article › peer-review

Abstract

Inappropriate reliance on automated advice can result in humans accepting incorrect advice or rejecting correct advice. Increased automation transparency and trust calibration feedback are principles purported to promote accurate automation use. We examined the effects of automation transparency, trust calibration feedback, and their potential interacting effect on automation use accuracy and other outcomes. Participants completed uninhabited vehicle management missions by agreeing/disagreeing with automated advice. Transparency was manipulated within-subjects (low, high) and trust calibration feedback between-subjects (absent, present). If trust was inappropriate, trust calibration feedback instructed participants to take their time and carefully check display information. Higher transparency benefited automation use accuracy, decision time, perceived workload, trust, and usability. Trust calibration feedback did not benefit automation use accuracy and consequently did not amplify the benefits of increased transparency. These findings have potential implications for the design of automated decision aids that support human understanding of, and calibration to, automation capabilities.

Original language: English
Pages (from-to): 14723-14733
Number of pages: 11
Journal: International Journal of Human-Computer Interaction
Volume: 41
Issue number: 23
Early online date: 16 Apr 2025
DOIs
Publication status: Published - 2025

Funding

Funder: Australian Research Council (ARC) — Funder number: FT190100812
