JSAI2019


Organized Session » [OS] OS-20

[3P3-OS-20] Extracting and Identifying Spoken-Language Information from Brain Waves

Thu. Jun 6, 2019 1:50 PM - 3:30 PM Room P (Front-left room of 1F Exhibition hall)

Tsuneo Nitta (Waseda University / Toyohashi University of Technology), Kouichi Katsurada (Tokyo University of Science), Yurie Iribe (Aichi Prefectural University), Ryo Taguchi (Nagoya Institute of Technology)

1:50 PM - 2:10 PM

[3P3-OS-20-01] Describing Brain Activity Evoked by Speech Stimuli

〇Rino Urushihara1, Ichiro Kobayashi1, Hiroto Yamaguchi2,3, Tomoya Nakai2,3, Shinji Nishimoto2,3 (1. Ochanomizu University, 2. National Institute of Information and Communications Technology, 3. Osaka University)

Keywords: Neuroscience

The analysis of semantic activity in the human brain is an active field of study. In this paper, we propose a deep learning method that generates text describing the semantic representations evoked by speech stimuli, decoded from functional magnetic resonance imaging (fMRI) brain data. Our study thereby aims to decode the higher-order perception that a person recalls in the brain in response to speech stimuli. However, collecting a large-scale brain activity dataset is difficult because observing brain activity with fMRI is expensive, whereas deep learning methods require large-scale datasets. We therefore employ an automatic speech recognition method and use a small amount of fMRI data efficiently for machine learning. Through experiments, we have confirmed a high correlation between the features predicted from fMRI data and the speech features.
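The evaluation step mentioned above, predicting speech features from fMRI responses and measuring their correlation with the true features, can be sketched with a simple linear (ridge) decoder on synthetic data. All shapes, the regularization strength, and the data itself are illustrative assumptions, not the authors' actual setup or model:

```python
import numpy as np

# Hypothetical shapes: X = fMRI responses (samples x voxels),
# Y = speech features from an ASR front end (samples x feature dims).
rng = np.random.default_rng(0)
n_train, n_test, n_voxels, n_feats = 500, 100, 100, 40
X_train = rng.standard_normal((n_train, n_voxels))
W_true = rng.standard_normal((n_voxels, n_feats))   # synthetic "true" mapping
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((n_train, n_feats))
X_test = rng.standard_normal((n_test, n_voxels))
Y_test = X_test @ W_true + 0.1 * rng.standard_normal((n_test, n_feats))

# Ridge regression decoder: W = (X^T X + alpha * I)^-1 X^T Y
alpha = 1.0
W = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(n_voxels),
                    X_train.T @ Y_train)
Y_pred = X_test @ W

# Per-dimension Pearson correlation between predicted and true features
def pearson_per_dim(a, b):
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / (
        np.sqrt((a ** 2).sum(axis=0)) * np.sqrt((b ** 2).sum(axis=0)))

corrs = pearson_per_dim(Y_pred, Y_test)
print("mean correlation:", round(float(corrs.mean()), 3))
```

A linear decoder with per-feature correlation scoring is a common baseline in fMRI decoding work; the paper's deep learning model would replace the ridge step, while the correlation-based evaluation stays the same in spirit.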