Abstract
Human-automation team effectiveness was examined using submarine simulators. Experiment 1
demonstrated that human-automation teams performed better during highly demanding tasks
when team communications were more, versus less, coordinated. A new theoretical framework for
human-automation team trust was then proposed: the Human-Autonomy Trust Expectation Model
(HATEM). Experiment 2 validated HATEM by demonstrating that humans trust automation more when
they are confident they can predict the circumstances in which the automation will fail. Experiment 3
used HATEM to qualify claims that anthropomorphisation improves human-automation trust,
demonstrating that anthropomorphic features make no difference unless they improve human
confidence in predicting the circumstances in which the automation is likely to fail.
| Original language | English |
|---|---|
| Qualification | Doctor of Philosophy |
| Awarding Institution | |
| Supervisors/Advisors | |
| Thesis sponsors | |
| Award date | 15 Feb 2024 |
| DOIs | |
| Publication status | Unpublished - 2023 |