4:40 PM - 5:00 PM
[2I5-OS-9a-05] Explainable AI for Predicting Personalized Emotion Ratings
[[Online]]
Keywords: Emotion, Explainable AI, Ordinal Model
Post hoc approaches, which seek explanations for trained deep models through reverse engineering and related techniques, have been widely used in Explainable AI. By contrast, approaches that build inherently interpretable models by limiting model complexity remain largely unexplored, at least in the field of Affective Computing. In this study, we aim to achieve both high predictive performance and interpretability by integrating an explanatory item response model for ordinal scales, whose psychological interpretation is well established, into a deep neural network. Experiments were conducted to confirm the extent to which the proposed method can predict an individual's perceived emotion from the facial expressions of others, and good prediction performance was obtained. The proposed method is expected to serve as a complement to post hoc methods in applications such as education and the support of interpersonal interaction.
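To illustrate the general idea of attaching an ordinal, item-response-style output layer to a neural feature extractor, the sketch below shows a cumulative-link (graded-response-style) head with rater-specific bias terms. This is not the authors' implementation; the class name OrdinalIRTHead, the latent dimension, the number of rating categories, and the toy feature network are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a neural feature extractor feeding an
# ordinal, IRT-style output layer with per-rater parameters.
import torch
import torch.nn as nn


class OrdinalIRTHead(nn.Module):
    """Cumulative-link head over K ordered rating categories (assumed design)."""

    def __init__(self, latent_dim: int, n_raters: int, n_categories: int):
        super().__init__()
        # Rater-specific bias, analogous to a person parameter in explanatory IRT.
        self.rater_bias = nn.Embedding(n_raters, 1)
        # Maps stimulus features to a one-dimensional latent trait.
        self.discrimination = nn.Linear(latent_dim, 1)
        # Unconstrained deltas; cumulative sum of softplus keeps thresholds ordered.
        self.threshold_deltas = nn.Parameter(torch.zeros(n_categories - 1))

    def forward(self, features: torch.Tensor, rater_ids: torch.Tensor) -> torch.Tensor:
        # Latent trait for each (stimulus, rater) pair: shape (B, 1).
        theta = self.discrimination(features) + self.rater_bias(rater_ids)
        # Ordered thresholds b_1 < b_2 < ... < b_{K-1}.
        thresholds = torch.cumsum(nn.functional.softplus(self.threshold_deltas), dim=0)
        # P(rating >= k+1) for each threshold; differences give category probabilities.
        cum_probs = torch.sigmoid(theta - thresholds)           # (B, K-1), decreasing in k
        upper = torch.cat([torch.ones_like(cum_probs[:, :1]), cum_probs], dim=1)
        lower = torch.cat([cum_probs, torch.zeros_like(cum_probs[:, :1])], dim=1)
        return upper - lower                                    # (B, K) category probabilities


# Toy usage with hypothetical 64-dimensional facial-expression features.
feature_net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
head = OrdinalIRTHead(latent_dim=16, n_raters=10, n_categories=5)

x = torch.randn(8, 64)                   # batch of stimulus features
raters = torch.randint(0, 10, (8,))      # which participant rated each stimulus
probs = head(feature_net(x), raters)     # (8, 5) probabilities over ordinal ratings
targets = torch.randint(0, 5, (8,))      # dummy observed ratings
loss = nn.functional.nll_loss(torch.log(probs + 1e-9), targets)
```

Keeping the ordinal structure and rater parameters explicit is what makes the mapping from features to ratings interpretable in the psychometric sense, while the feature network supplies the predictive capacity of a deep model.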