Abstract
Inappropriate reliance on automated advice can result in humans accepting incorrect advice or rejecting correct advice. Increased automation transparency and trust calibration feedback are principles purported to promote accurate automation use. We examined the effects of automation transparency, trust calibration feedback, and their potential interacting effect on automation use accuracy and other outcomes. Participants completed uninhabited vehicle management missions by agreeing/disagreeing with automated advice. Transparency was manipulated within-subjects (low, high) and trust calibration feedback between-subjects (absent, present). If trust was inappropriate, trust calibration feedback instructed participants to take their time and carefully check display information. Higher transparency benefited automation use accuracy, decision time, perceived workload, trust, and usability. Trust calibration feedback did not benefit automation use accuracy and consequently did not amplify the benefits of increased transparency. These findings can potentially inform the design of automated decision aids to support human understanding of, and calibration to, automation capabilities.
Original language | English
---|---
Number of pages | 11
Journal | International Journal of Human-Computer Interaction
DOIs |
Publication status | E-pub ahead of print - 16 Apr 2025
Fingerprint
Dive into the research topics of 'Calibrating Reliance on Automated Advice: Transparency and Trust Calibration Feedback'. Together they form a unique fingerprint.

Projects
1 Finished

- Adapting Automation Transparency to Allow Accurate Use by Humans
  Loft, S. (Investigator 01)
  ARC Australian Research Council
  1/01/19 → 31/01/25
  Project: Research