JSAI2020

Presentation information

Organized Session


[4F2-OS-25a] OS-25 (1)

Fri. Jun 12, 2020 12:00 PM - 1:40 PM Room F (jsai2020online-6)

Shiro Kumano (NTT), Kazunori Terada (Gifu University), Kenji Suzuki (University of Tsukuba)

1:00 PM - 1:20 PM

[4F2-OS-25a-03] Multimodal Evoked Emotion Prediction and its Application to ASMR Video Analysis

Yu Yang1, 〇Chie Hieida2, Takato Horii3,4, Takayuki Nagai3,1 (1. The University of Electro-Communications, 2. Nara Institute of Science and Technology, 3. Osaka University, 4. International Research Center for Neurointelligence, The University of Tokyo)

Keywords: Evoked emotion prediction, Multimodal, Deep learning, Autonomous sensory meridian response

With the spread of digital devices such as smartphones and tablets, the number of videos available for users to watch has grown enormously. In this context, applications such as classification, retrieval, and distribution of personalized video content that meet consumer needs remain a challenge. In general, humans tend to choose movies and music based on their emotional characteristics, so analyzing evoked emotion may provide a guideline for these tasks. Emotions evoked by a video are related to both its audio and visual modalities. In this study, we propose a deep learning model that estimates movie-evoked emotion by integrating multimodal information. Experiments on a movie database verify how estimation performance changes as multimodal information is integrated, and show that accuracy improves over the conventional method. In addition, we analyze Autonomous Sensory Meridian Response (ASMR) videos, which have recently become a hot topic, and examine the relationship between evoked emotion and viewer behavior such as view counts and like/dislike ratios.
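The abstract does not specify the model architecture, so the following is only a minimal illustrative sketch in PyTorch of one common way to integrate audio and video modalities for evoked-emotion prediction: late fusion by concatenating per-modality embeddings. The feature dimensions, the fusion scheme, and the number of emotion classes are all assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn


class MultimodalEvokedEmotionNet(nn.Module):
    """Hypothetical late-fusion model over precomputed per-clip features.

    All sizes below are assumptions: the paper's actual architecture,
    feature extractors, and emotion taxonomy are not given in the abstract.
    """

    def __init__(self, video_dim=2048, audio_dim=128, hidden_dim=256, num_emotions=8):
        super().__init__()
        # Per-modality encoders mapping features to a shared hidden size
        self.video_encoder = nn.Sequential(
            nn.Linear(video_dim, hidden_dim), nn.ReLU(), nn.Dropout(0.3)
        )
        self.audio_encoder = nn.Sequential(
            nn.Linear(audio_dim, hidden_dim), nn.ReLU(), nn.Dropout(0.3)
        )
        # Multimodal integration: concatenate embeddings, then classify
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_emotions),
        )

    def forward(self, video_feat, audio_feat):
        v = self.video_encoder(video_feat)
        a = self.audio_encoder(audio_feat)
        return self.classifier(torch.cat([v, a], dim=-1))


# Usage on dummy features for a batch of 4 clips
model = MultimodalEvokedEmotionNet()
video = torch.randn(4, 2048)   # e.g., pooled CNN frame features (assumed)
audio = torch.randn(4, 128)    # e.g., pooled mel-spectrogram embeddings (assumed)
logits = model(video, audio)   # shape (4, num_emotions): evoked-emotion scores
```

Concatenation-based late fusion is only one of several integration strategies (early fusion and attention-based fusion are common alternatives); it is used here because it makes the role of each modality explicit in a few lines.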
