6:00 PM - 6:20 PM
[1G4-OS-13b-03] Adaptive Trust Calibration for Human-AI Collaboration
Keywords: Human Agent Interaction, Trust
Poor trust calibration in human-AI collaboration often degrades the total
system performance in terms of safety and efficiency. Existing studies have
primarily examined the importance of system transparency in maintaining
proper trust calibration, with little emphasis on how to detect over-trust
and under-trust or how to recover from them. To address
these research gaps, we propose a novel method of adaptive trust
calibration, which consists of a framework for detecting the status of
calibration and cognitive cues called "trust calibration cues". Our
framework and four types of trust calibration cues were evaluated in an
online experiment with a drone simulator. The results showed that presenting
simple cues at moments of over-trust significantly promoted trust
calibration.
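
The detect-and-cue loop described in the abstract can be illustrated with a short sketch. The snippet below is a minimal conceptual illustration, not the authors' implementation: all names, the reliance/reliability signals, and the threshold value are hypothetical assumptions, since the abstract does not specify how calibration status is computed.

```python
# Conceptual sketch of an adaptive trust calibration loop.
# All identifiers and the margin value are illustrative assumptions,
# not taken from the paper.

from enum import Enum, auto
from typing import Optional


class TrustStatus(Enum):
    CALIBRATED = auto()
    OVER_TRUST = auto()   # user relies on the AI more than its reliability warrants
    UNDER_TRUST = auto()  # user relies on the AI less than its reliability warrants


def detect_calibration_status(user_reliance: float, ai_reliability: float,
                              margin: float = 0.15) -> TrustStatus:
    """Compare observed reliance with estimated AI reliability.

    Both inputs are assumed to be rates in [0, 1]; the margin is an
    illustrative threshold, not a value from the paper.
    """
    if user_reliance > ai_reliability + margin:
        return TrustStatus.OVER_TRUST
    if user_reliance < ai_reliability - margin:
        return TrustStatus.UNDER_TRUST
    return TrustStatus.CALIBRATED


def calibration_step(user_reliance: float, ai_reliability: float) -> Optional[str]:
    """One step of the adaptive loop: present a trust calibration cue
    only when mis-calibration is detected, otherwise stay silent."""
    status = detect_calibration_status(user_reliance, ai_reliability)
    if status is TrustStatus.OVER_TRUST:
        return "cue: consider monitoring the AI more closely"
    if status is TrustStatus.UNDER_TRUST:
        return "cue: the AI has been performing reliably"
    return None  # calibrated: no cue needed


if __name__ == "__main__":
    # e.g. the user accepts 95% of AI actions while the AI is only 70% reliable
    print(calibration_step(user_reliance=0.95, ai_reliability=0.70))
```

The key design point the abstract implies is that cues are adaptive: they are issued only when mis-calibration is detected, rather than being shown continuously.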