10:00 AM - 10:20 AM
[1J1-OS-21-01] Adaptive trust calibration and its applications
Keywords: trust calibration, meta-suggestion, trust
Safety and efficiency of human-AI collaboration often depend on how appropriately humans calibrate their trust in AI agents. Over-trusting an autonomous system can cause serious safety issues. Although many studies have focused on the importance of system transparency for maintaining proper trust calibration, research on detecting and mitigating improper trust calibration remains very limited. To fill these gaps, we propose a method of adaptive trust calibration that consists of a framework for detecting an inappropriate calibration status by monitoring the user's reliance behavior, and cognitive cues called "trust calibration cues" that prompt the user to reinitiate trust calibration. We will also talk about practical applications of our framework.
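The abstract's loop (monitor reliance behavior, detect miscalibration, present a cue) could be sketched as follows. This is a hypothetical illustration, not the authors' actual framework: the class names, thresholds, and cue wording are all assumptions for the sake of the example.

```python
# Hypothetical sketch of an adaptive trust calibration loop: monitor the
# user's reliance behavior, detect over- or under-trust, and emit a
# "trust calibration cue". All names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class Interaction:
    ai_was_correct: bool   # whether the AI's recommendation was correct
    user_relied: bool      # whether the user followed the recommendation

def detect_miscalibration(history, window=10, threshold=0.3):
    """Return 'over-trust', 'under-trust', or None from recent behavior.

    Over-trust: the user keeps relying on the AI even when it is wrong.
    Under-trust: the user keeps rejecting the AI even when it is right.
    """
    recent = history[-window:]
    if not recent:
        return None
    over = sum(1 for i in recent if i.user_relied and not i.ai_was_correct)
    under = sum(1 for i in recent if not i.user_relied and i.ai_was_correct)
    if over / len(recent) > threshold:
        return "over-trust"
    if under / len(recent) > threshold:
        return "under-trust"
    return None

def trust_calibration_cue(status):
    """A simple cue prompting the user to reinitiate trust calibration."""
    if status == "over-trust":
        return "Cue: the AI has been wrong recently; please verify its output."
    if status == "under-trust":
        return "Cue: the AI has been right recently; consider its suggestions."
    return None
```

For example, a recent window in which the user followed four incorrect recommendations would cross the (assumed) 0.3 threshold and trigger an over-trust cue.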