We applied a computational model to examine the extent to which participants used an automated decision aid as an advisor, as compared to a more autonomous trigger of responding, at varying levels of decision aid reliability. In an air traffic control conflict detection task, we found higher accuracy when the decision aid was correct, and more errors when the decision aid was incorrect, as compared to a manual condition (no decision aid). Responses that were correct despite incorrect automated advice were slower than matched manual responses. Decision aids set at lower reliability (75%) had smaller effects on choices and response times, and were subjectively trusted less, than decision aids set at higher reliability (95%). We fitted an evidence accumulation model to choices and response times to measure how information processing was affected by decision aid inputs. Participants primarily treated low-reliability decision aids as an advisor rather than directly accumulating evidence based on its advice. Participants directly accumulated evidence based upon the advice of high-reliability decision aids, consistent with granting decision aids more autonomous influence over decisions. Individual differences in the level of direct accumulation correlated with subjective trust, suggesting a cognitive mechanism by which trust impacts human decisions.
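The distinction between an "advisor" and a more autonomous trigger can be illustrated with a minimal drift-diffusion simulation. This is an illustrative sketch, not the authors' fitted model: the function name, parameters, and the `advice_weight` term (which scales how strongly the aid's +1/-1 advice is accumulated directly as evidence, with weight 0 corresponding to a purely advisory role) are all assumptions for demonstration.

```python
import random

def simulate_ddm_trial(drift, advice=0.0, advice_weight=0.0,
                       threshold=1.0, dt=0.01, noise_sd=1.0,
                       max_steps=10_000, rng=random):
    """Simulate one drift-diffusion trial (illustrative sketch).

    drift         : evidence strength from the task stimulus itself
    advice        : decision aid's recommendation (+1 or -1), hypothetical coding
    advice_weight : how much the advice is accumulated directly as evidence;
                    0 means the aid influences the decision only indirectly
    Returns (response, rt): +1/-1 for the boundary reached, 0 if no decision.
    """
    x = 0.0  # accumulated evidence
    for step in range(1, max_steps + 1):
        # Direct accumulation: the aid's advice adds to the drift rate.
        x += (drift + advice_weight * advice) * dt
        x += rng.gauss(0.0, noise_sd) * dt ** 0.5  # diffusion noise
        if x >= threshold:
            return +1, step * dt
        if x <= -threshold:
            return -1, step * dt
    return 0, max_steps * dt  # deadline reached without a decision
```

With a positive `advice_weight`, incorrect advice (opposite in sign to the drift) slows and sometimes flips responses, mirroring the reported pattern of more errors and slower correct responses under incorrect automated advice.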
Automated Decision Aids: When Are They Advisors and When Do They Take Control of Human Decision Making?