Organized Session

[2G6-OS-18b] Extracting, Identifying, and Using Spoken-Language Information from EEG (2/2)

Wed. Jun 15, 2022, 5:20 PM - 6:20 PM, Room G

Organizers: Tsuneo Nitta (Toyohashi University of Technology) [on-site], Kouichi Katsurada (Tokyo University of Science), Yurie Iribe (Aichi Prefectural University), Ryo Taguchi (Nagoya Institute of Technology), Shuji Shinohara (The University of Tokyo)

5:40 PM - 6:00 PM

[2G6-OS-18b-02] Classification of speech-imagery and non-recollection in EEG

〇Daisuke Suzuki1, Motoharu Yamao1, Yurie Iribe1, Ryo Taguchi2, Kouichi Katsurada3, Tsuneo Nitta4 (1. Aichi Prefectural University, 2. Nagoya Institute of Technology, 3. Tokyo University of Science, 4. Toyohashi University of Technology)

Keywords: BCI, EEG, speech-imagery, non-recollection

Brain Computer Interface (BCI) research has begun to identify recalled syllables from the electroencephalogram (EEG) recorded during speech imagery. At present, it is difficult to identify the true recall duration from EEG data, so inaccurate recall data that include non-recollection intervals, or recall sections labeled by visual inspection of the spectrum outline, are often used to identify the recalled syllables. Because visual syllable labeling takes considerable time and labor, it is desirable to automate the process of discriminating correct speech-imagery segments. In this paper, we construct models of speech-imagery segments and non-recollection segments in order to obtain the true syllable sections. We extract the complex cepstrum from speech-imagery/non-recollection data that were syllable-labeled by visual inspection, and classify speech-imagery vs. non-recollection segments using these features. Finally, we report the classification results obtained by 10-fold cross-validation.
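The pipeline described in the abstract — complex-cepstrum features extracted per segment, then a two-class (speech-imagery vs. non-recollection) evaluation under 10-fold cross-validation — can be sketched as below. This is a minimal illustration, not the authors' implementation: the segment length, the number of cepstral coefficients, the nearest-class-mean classifier, and the synthetic random data are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

def complex_cepstrum(x, n_coef=12):
    """Complex cepstrum of a 1-D signal: IFFT of (log magnitude + j * unwrapped phase).
    Returns the first n_coef coefficients as the feature vector."""
    spec = np.fft.fft(x)
    log_spec = np.log(np.abs(spec) + 1e-12) + 1j * np.unwrap(np.angle(spec))
    return np.fft.ifft(log_spec).real[:n_coef]

def kfold_accuracy(X, y, k=10):
    """Plain k-fold cross-validation with a nearest-class-mean classifier
    (a stand-in for whatever model the paper actually trains)."""
    idx = np.arange(len(X))
    accs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        means = {c: X[train][y[train] == c].mean(axis=0) for c in np.unique(y)}
        preds = np.array([min(means, key=lambda c: np.linalg.norm(v - means[c]))
                          for v in X[fold]])
        accs.append(np.mean(preds == y[fold]))
    return float(np.mean(accs))

# Synthetic stand-in for labeled EEG segments (100 segments of 256 samples);
# label 1 = speech-imagery, 0 = non-recollection.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((100, 256))
y = np.tile([0, 1], 50)

X = np.array([complex_cepstrum(seg) for seg in X_raw])  # shape (100, 12)
acc = kfold_accuracy(X, y, k=10)  # near chance on random data, by construction
```

On random data the accuracy is near chance, as expected; the sketch only shows the shape of the evaluation, with real labeled EEG segments substituted for `X_raw` in practice.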
