Presentation information

Organized Session

Organized Session » OS-18

[2G5-OS-18a] Extracting, Classifying, and Utilizing Spoken Language Information from EEG (1/2)

Wed. Jun 15, 2022 3:20 PM - 4:40 PM, Room G

Organizers: Tsuneo Nitta (Toyohashi University of Technology) [on-site], Koichi Katsurada (Tokyo University of Science), Yurie Iribe (Aichi Prefectural University), Ryo Taguchi (Nagoya Institute of Technology), Shuji Shinohara (The University of Tokyo)

3:40 PM - 4:00 PM

[2G5-OS-18a-02] Classification of Words from Inner Speech Using A Deep Learning Model Trained on EEG Data

〇Nao Yukawa1, Masahiro Suzuki1, Yutaka Matsuo1 (1. Univ. of Tokyo)


Keywords: transfer learning, speech decoding, EEG, inner speech

Decoding inner speech from brain activity data can not only facilitate communication for patients with disabilities but also lead to a better understanding of metacognition. In a previous study, a deep learning model called EEGNet was applied to inner speech decoding, but it achieved only 30% accuracy on a 4-class classification task. Transfer learning is considered a promising way to improve on this; however, it has not yet been applied to inner speech, and even for EEG data in general its effectiveness across various datasets has not been sufficiently verified. This study examines whether feature extraction improves when transfer learning is performed on an inner speech dataset using EEG data from different tasks and non-EEG data. The results confirm that inner speech classification accuracy is improved by transfer learning that uses data from different subjects, but not by transfer learning that uses EEG data from different tasks. For an image dataset, on the other hand, an improvement in accuracy was confirmed by freezing some layers, even though the nature of the data differs from that of EEG data.
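The core technique in the abstract, pretraining a model and then fine-tuning with some layers frozen, can be illustrated with a toy sketch. This is a hypothetical pure-Python example, not the authors' EEGNet pipeline: a two-layer linear model is pretrained on a "source" task, then only the final layer is updated on a "target" task while the first layer (the feature extractor) stays frozen.

```python
import random

random.seed(0)

def train(xs, ys, w1, w2, lr=0.01, epochs=200, freeze_w1=False):
    """Gradient descent on a 2-layer linear model y = w2 * (w1 * x).

    With freeze_w1=True, only the output layer w2 is updated,
    mimicking transfer learning with a frozen feature extractor.
    """
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h = w1 * x            # hidden "feature"
            pred = w2 * h
            err = pred - y        # gradient of 0.5 * err**2 w.r.t. pred
            if not freeze_w1:
                w1 -= lr * err * w2 * x
            w2 -= lr * err * h
    return w1, w2

# Hypothetical source task (y = 2x) and related target task (y = 3x).
src_x = [random.uniform(-1, 1) for _ in range(50)]
src_y = [2 * x for x in src_x]
tgt_x = [random.uniform(-1, 1) for _ in range(50)]
tgt_y = [3 * x for x in tgt_x]

# Pretrain both layers on the source task.
w1, w2 = train(src_x, src_y, w1=0.5, w2=0.5)

# Fine-tune: freeze w1, adapt only w2 to the target task.
w1_frozen = w1
w1, w2 = train(tgt_x, tgt_y, w1, w2, freeze_w1=True)

print(w1 == w1_frozen)   # frozen layer is unchanged
print(round(w1 * w2, 2)) # effective slope adapts toward the target's 3
```

In a deep learning framework the same idea is typically expressed by disabling gradients on the early layers (e.g. setting `requires_grad=False` in PyTorch) and passing only the remaining parameters to the optimizer.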
