JSAI2021

Presentation information

Organized Session

Organized Session » OS-4

[4D3-OS-4b] Affective Computing (2/3)

Fri. Jun 11, 2021 1:40 PM - 3:20 PM Room D (OS room 2)

Chair: Kazunori Terada (Gifu University)

2:40 PM - 3:00 PM

[4D3-OS-4b-04] Hyperspherical Representation of Emotion by Combining Recognition and Unification Tasks Based on Multimodal Fusion

〇Seiichi Harata1, Takuto Sakuma1, Shohei Kato1,2 (1. Dept. of Engineering, Graduate School of Engineering, Nagoya Institute of Technology, 2. Frontier Research Institute for Information Science, Nagoya Institute of Technology)

Keywords:Affective Computing, Multimodal Fusion, Emotion Recognition, Deep Neural Networks, Emotional Space

To emulate human emotions in agents, a mathematical representation of emotion (an emotional space) is essential for each component, such as emotion recognition, generation, and expression. This study aims to model human emotion perception by acquiring a modality-independent emotional space that extracts shared emotional information from different modalities. We propose a method for acquiring a hyperspherical emotional space by fusing multiple modalities with a DNN and combining an emotion recognition task with a unification task. The emotion recognition task learns the representation of emotions, and the unification task learns an identical emotional space from each modality. Through experiments with audio-visual data, we confirmed that the proposed method can adequately represent emotions in a low-dimensional hyperspherical emotional space under this paper's experimental conditions. We also confirmed that the proposed method's emotional representation is modality-independent by measuring the robustness of emotion recognition across available modalities in a modality ablation experiment.
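The abstract does not give implementation details, so the following is only a minimal sketch of the general idea: per-modality encoders project audio and visual features onto a shared unit hypersphere, a recognition loss trains the emotional representation, and a unification loss pulls the two modality embeddings of the same sample together. The encoder sizes, the embedding dimension, the MSE unification loss, and the loss weighting are all assumptions, not the authors' actual architecture.

```python
# Hedged sketch (not the authors' implementation): one plausible way to combine
# an emotion recognition task with a unification task so that audio and visual
# encoders map into a shared low-dimensional hyperspherical emotional space.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Maps one modality's features onto the unit hypersphere S^(d-1)."""
    def __init__(self, in_dim: int, emb_dim: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2 normalization places every embedding on the hypersphere.
        return F.normalize(self.net(x), dim=-1)


class EmotionClassifier(nn.Module):
    """Shared classifier head for the recognition task on the spherical embedding."""
    def __init__(self, emb_dim: int = 3, n_classes: int = 4):
        super().__init__()
        self.fc = nn.Linear(emb_dim, n_classes)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.fc(z)


def joint_loss(audio_feat, visual_feat, labels,
               audio_enc, visual_enc, clf, unify_weight: float = 1.0):
    """Recognition loss per modality plus a unification loss that pulls the
    audio and visual embeddings of the same sample to the same point."""
    z_a = audio_enc(audio_feat)
    z_v = visual_enc(visual_feat)
    recognition = (F.cross_entropy(clf(z_a), labels)
                   + F.cross_entropy(clf(z_v), labels))
    unification = F.mse_loss(z_a, z_v)  # encourages a modality-independent space
    return recognition + unify_weight * unification
```

Because each modality is embedded into the same normalized space, either encoder can be dropped at test time (as in the modality ablation experiment) and the remaining embedding still feeds the same classifier.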
