4:10 PM - 4:30 PM
[2T5-OS-5b-03] Does Context Affect the Rationales for Human Awareness? Towards Understanding Human Decision Criteria for Trustworthy Explainable AI
Keywords: Explainable AI, Decision making, Trustworthy AI
As Artificial Intelligence (AI) achieves high predictive accuracy, its use in supporting human predictive tasks has advanced significantly. As AI grows more sophisticated, however, humans find it harder to comprehend and retrace how an algorithm arrived at a result. Explainable AI (XAI) has been developed to bridge this gap by providing rational explanations that aid comprehension. Despite this, it remains unclear what constitutes an effective explanation for fostering trust in AI. This study focuses on two factors affecting trust in AI, the importance of decision outcomes and the decision-making content, and explores how they relate to the basis of human judgment made without AI. Our findings suggest that the necessity of explanation for trust in AI varies with the context of AI use, indicating that the explanation needed to gain human trust differs according to the scenario in which AI is applied.