Objective: Examine the effects of decision risk and automation transparency on the accuracy and timeliness of operator decisions, automation verification rates, and subjective workload.

Background: Decision aids typically benefit performance but can provide incorrect advice due to contextual factors, creating the potential for automation disuse or misuse. Decision aids can reduce operators' manual problem evaluation, and operators may also strategically minimize verification of automated advice in order to manage workload.

Method: Participants assigned the optimal unmanned vehicle to complete missions. A decision aid provided advice but was not always reliable. Two levels of decision aid transparency were manipulated between participants. The risk associated with each decision was manipulated using a financial incentive scheme. Participants could use a calculator to verify automated advice; however, this incurred a financial penalty.

Results: For high- compared with low-risk decisions, participants were more likely to reject incorrect automated advice, more likely to verify the automation, and reported higher workload. Increased transparency did not improve decision accuracy or affect workload, but it decreased automation verification and eliminated the increased decision time associated with high decision risk.

Conclusion: Increased automation transparency was beneficial in that it decreased automation verification and decision time. The increased workload and automation verification for high-risk missions are not necessarily problematic given the improved automation correct rejection rate.

Application: The findings have potential application to the design of interfaces to improve human–automation teaming, and to anticipating the impact of decision risk on operator behavior.
The Impact of Transparency and Decision Risk on Human–Automation Teaming Outcomes