JSAI2019

General Session » [GS] J-1 Fundamental AI, theory

[4C2-J-1] Fundamental AI, theory: brain-based design of intelligence

Fri. Jun 7, 2019 12:00 PM - 1:20 PM Room C (4F International conference hall)

Chair: Hiroki Terashima, Reviewer: Yoshimasa Tawatsuji

12:40 PM - 1:00 PM

[4C2-J-1-03] Generating Natural Language Descriptions with Brain Activity Data Evoked by Video Stimuli using Deep Learning

〇Kaei Cho1, Satoshi Nishida2, Shinji Nishimoto2, Ichiro Kobayashi1 (1. Ochanomizu University, 2. National Institute of Information and Communications Technology)

Keywords: brain and neuroscience, video captioning

Quantitative analyses of human brain activity based on language representations, such as the semantic categories of words, have been actively studied in brain and neuroscience research. This study attempts to generate natural language descriptions of human brain activity evoked by video stimuli using deep learning. Because brain activity data are too scarce to train a captioning model from scratch, the proposed method employs a pre-trained S2VT model, an end-to-end sequence-to-sequence model that generates captions for videos. To apply brain activity data to this video captioning model, we train a model that learns the correspondence between brain activity data and video features. In our experiments, we have not yet succeeded in generating appropriate sentences; we will further refine the architecture.
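
The sketch below illustrates the mapping step described in the abstract: a regressor is trained to predict video features from brain activity vectors, and the predicted features would then be fed to a frozen, pre-trained S2VT-style captioner. This is a minimal illustrative sketch, not the authors' implementation; all dimensions, variable names, and the pretrained_s2vt handle are assumptions introduced for clarity.

# Minimal sketch (not the authors' code): learn a mapping from fMRI brain
# activity vectors to the video-feature space consumed by a pre-trained
# S2VT-style captioner. All dimensions and names here are illustrative
# assumptions, not taken from the paper.
import torch
import torch.nn as nn

N_VOXELS = 30000      # assumed number of cortical voxels per fMRI sample
FEAT_DIM = 4096       # assumed dimensionality of the S2VT frame features

# Simple MLP regressor: brain activity -> video feature vector.
brain_to_video = nn.Sequential(
    nn.Linear(N_VOXELS, 1024),
    nn.ReLU(),
    nn.Linear(1024, FEAT_DIM),
)

optimizer = torch.optim.Adam(brain_to_video.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Synthetic stand-ins for paired training data: brain responses recorded
# while subjects watched video clips, and frame features extracted from
# the same clips.
brain_responses = torch.randn(64, N_VOXELS)
video_features = torch.randn(64, FEAT_DIM)

for epoch in range(10):
    optimizer.zero_grad()
    predicted = brain_to_video(brain_responses)
    loss = loss_fn(predicted, video_features)
    loss.backward()
    optimizer.step()

# At inference time, the predicted features would replace the true video
# features as input to the frozen captioning model, e.g.:
#   caption = pretrained_s2vt.decode(brain_to_video(new_brain_response))
# (pretrained_s2vt is a hypothetical handle to the pre-trained captioner.)

In this sketch the captioner itself is never updated; only the brain-to-feature regressor is trained, which is one way to work around the small amount of available brain activity data.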