Modelling how humans use decision aids in simulated air traffic control
Date: 2020
Abstract
Air traffic controllers must often decide whether pairs of aircraft will violate safe standards of separation in the future, a task known as conflict detection. Recent research has applied evidence accumulation models (e.g., the linear ballistic accumulator; Brown & Heathcote, 2008) to simulated conflict detection tasks to examine how the cognitive processes underlying conflict detection are affected by workplace factors such as time pressure and multiple task demands (e.g., Boag, Strickland, Loft & Heathcote, 2019). To meet increasing air traffic demands in the future, controllers will increasingly require assistance from automation. Although automation can increase efficiency and overall performance, it may also decrease operator engagement, leading to potentially dire consequences in the event of an automation failure. In the current study, we applied the linear ballistic accumulator model to examine how humans adapt to automated decision aids when performing simulated conflict detection. Participants performed manual conditions, in which they made conflict detection decisions with no assistance. They also performed automated conditions, in which they were provided with an (accurate but not perfect) decision aid that recommended a decision on each trial. We found that decision aids improved performance, primarily by inhibiting evidence accumulation towards the incorrect decision. Similarly, incorrect decision aids (i.e., automation failures) impaired performance because accumulation towards the correct decision was inhibited. To account for these findings, we developed a framework for understanding human information integration with potentially broad applications. Future research should investigate how cognitive processes are affected by differing levels of automation reliability, and test whether our model applies to other important task contexts.
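To make the modelling idea concrete, the following is a minimal illustrative sketch (not taken from the paper) of a linear ballistic accumulator race. In the LBA, each response accumulator starts from a uniformly sampled point, rises linearly at a normally distributed drift rate, and the first accumulator to reach a common threshold determines the response. The abstract's finding that a correct decision aid inhibits accumulation towards the incorrect decision is mimicked here, purely as an assumption, by lowering the incorrect accumulator's mean drift rate; all parameter values and function names are hypothetical.

```python
import random


def simulate_lba_trial(drift_means, threshold=1.0, start_max=0.5,
                       drift_sd=0.3, t0=0.2, rng=random):
    """Simulate one linear ballistic accumulator (LBA) trial.

    Each accumulator races linearly from a uniform start point in
    [0, start_max] to a common threshold at a drift rate sampled from
    a normal distribution; the first to arrive gives the response.
    Returns (choice index, response time including non-decision time t0).
    """
    finish_times = []
    for v in drift_means:
        start = rng.uniform(0, start_max)   # trial-to-trial start variability
        drift = rng.gauss(v, drift_sd)      # trial-to-trial drift variability
        if drift <= 0:
            finish_times.append(float("inf"))  # never reaches threshold
        else:
            finish_times.append((threshold - start) / drift)
    choice = min(range(len(finish_times)), key=finish_times.__getitem__)
    return choice, t0 + finish_times[choice]


# Accumulator 0 = correct response. An accurate aid is modelled (as an
# assumption) by reducing the incorrect accumulator's mean drift rate.
random.seed(1)
n = 2000
acc_unaided = sum(simulate_lba_trial([1.0, 0.8])[0] == 0 for _ in range(n)) / n
acc_aided = sum(simulate_lba_trial([1.0, 0.4])[0] == 0 for _ in range(n)) / n
print(f"accuracy without aid: {acc_unaided:.3f}")
print(f"accuracy with aid:    {acc_aided:.3f}")
```

Under these assumed parameters, inhibiting the incorrect accumulator raises accuracy, which is the qualitative pattern the abstract reports for correct decision aids; an automation failure would correspond to inhibiting the correct accumulator instead.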
Related items
Showing items related by title, author, creator and subject.
- Strickland, Luke; Heathcote, Andrew; Bowden, Vanessa; Boag, Russell; Wilson, Micah; Khan, Samha; Loft, Shayne (2021). Humans increasingly use automated decision aids. However, environmental uncertainty means that automated advice can be incorrect, creating the potential for humans to action incorrect advice or to disregard correct advice. ...
- Strickland, Luke; Boag, Russell; Heathcote, Andrew; Bowden, Vanessa; Loft, Shayne (2022). We applied a computational model to examine the extent to which participants used an automated decision aid as an advisor, as compared to a more autonomous trigger of responding, at varying levels of decision aid reliability. ...
- Griffiths, Natalie; Bowden, Vanessa K; Wee, Serena; Strickland, Luke; Loft, Shayne (2024). Humans working in modern work systems are increasingly required to supervise task automation. We examined whether manual aircraft conflict detection skill predicted participants' ability to respond to conflict detection ...