Presentation information


Organized Session » OS-18

[2G5-OS-18a] Extracting, Identifying, and Utilizing Spoken-Language Information from EEG (1/2)

Wed. Jun 15, 2022 3:20 PM - 4:40 PM, Room G

Organizers: Tsuneo Nitta (Toyohashi University of Technology) [on-site], Koichi Katsurada (Tokyo University of Science), Yurie Iribe (Aichi Prefectural University), Ryo Taguchi (Nagoya Institute of Technology), Shuji Shinohara (The University of Tokyo)

3:20 PM - 3:40 PM

[2G5-OS-18a-01] Verification of general-purpose language models and deep learning models for estimating brain activities evoked by verbal stimuli

〇Rikako Sumida1, Hiroto Yamaguchi2,3, Tomoya Nakai3, Shinji Nishimoto2,3, Ichiro Kobayashi1 (1. Ochanomizu University, 2. Osaka University, 3. National Institute of Information and Communications Technology)

Keywords: fMRI, deep learning, language processing

To estimate brain states evoked by spoken-conversation stimuli, we conducted an experiment with three types of deep learning models (Bi-LSTM/Bi-GRU/Bi-RNN) that predict brain activity from speech spectrograms used as acoustic features, and compared the estimation performance of each model. There was no significant difference in performance among the three models, and we confirmed that the brain regions near the ears, which are considered to be responsible for phonological and grammatical processing, responded better.
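The basic encoding-model setup described above (spectrogram features in, per-voxel fMRI responses out, scored per voxel) can be sketched as follows. This is a simplified illustration with synthetic data: ridge regression stands in for the Bi-LSTM/Bi-GRU/Bi-RNN encoders used in the paper, and all array sizes and variable names are hypothetical.

```python
import numpy as np

# Hypothetical sizes: 200 fMRI time points (TRs), 64 spectrogram
# frequency bins per TR, 1000 voxels. Data here is synthetic.
rng = np.random.default_rng(0)
n_trs, n_freq_bins, n_voxels = 200, 64, 1000

X = rng.standard_normal((n_trs, n_freq_bins))         # spectrogram features
Y = X @ rng.standard_normal((n_freq_bins, n_voxels))  # synthetic responses

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X^T X + alpha*I)^-1 X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

W = ridge_fit(X, Y)
Y_hat = X @ W

# Estimation performance is commonly scored as per-voxel Pearson
# correlation between measured and predicted responses.
r = np.array([np.corrcoef(Y[:, v], Y_hat[:, v])[0, 1]
              for v in range(n_voxels)])
print(round(float(r.mean()), 3))
```

In the actual study a recurrent encoder replaces the linear map, but the fitting-and-correlation evaluation loop is the same shape.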
In addition, we predicted brain activity using linguistic features obtained by transcribing the auditory stimuli into text. We used RoBERTa/BERT/word2vec as general-purpose language models to convert the text into embedding vectors. In this experiment, we confirmed responses across a wide range of language areas in the brain, not limited to the regions around the ears.
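Turning a transcript into time-aligned linguistic features typically means looking up an embedding for each word and pooling the words that fall within each fMRI TR window. The sketch below illustrates that alignment step with a toy random embedding table standing in for RoBERTa/BERT/word2vec; the vocabulary, TR length, and pooling choice are assumptions, not the authors' exact procedure.

```python
import numpy as np

# Toy embedding table standing in for a trained language model.
dim = 8
vocab = {"the": 0, "brain": 1, "hears": 2, "speech": 3}
emb = np.random.default_rng(1).standard_normal((len(vocab), dim))

# (word, onset in seconds) pairs, as from a forced-aligned transcript.
words = [("the", 0.2), ("brain", 0.8), ("hears", 2.1), ("speech", 2.9)]

def tr_features(words, tr=2.0, n_trs=2):
    """Average the embeddings of words whose onsets fall in each TR."""
    feats = np.zeros((n_trs, dim))
    for t in range(n_trs):
        vecs = [emb[vocab[w]] for w, onset in words
                if t * tr <= onset < (t + 1) * tr]
        if vecs:
            feats[t] = np.mean(vecs, axis=0)
    return feats

F = tr_features(words)
print(F.shape)  # one pooled feature vector per TR
```

With a contextual model such as BERT, the per-word vectors would come from the model's hidden states rather than a lookup table, but the TR-level pooling is analogous.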
