JSAI2022

Presentation information

Organized Session: OS-18

[2G6-OS-18b] Extracting, Discriminating, and Utilizing Spoken Language Information from EEG (2/2)

Wed. Jun 15, 2022, 5:20 PM - 6:20 PM, Room G

Organizers: Tsuneo Nitta (Toyohashi University of Technology) [on-site], Kouichi Katsurada (Tokyo University of Science), Yurie Iribe (Aichi Prefectural University), Ryo Taguchi (Nagoya Institute of Technology), Shuji Shinohara (The University of Tokyo)

6:00 PM - 6:20 PM

[2G6-OS-18b-03] Accent discrimination from speech-imagery EEG.

〇Takuro Fukuda1, Shun Sawada1, Hidehumi Ohmura1, Kouichi Katsurada1, Motoharu Yamao2, Yurie Iribe2, Ryo Taguchi3, Tsuneo Nitta4 (1. Tokyo University of Science, 2. Aichi Prefectural University, 3. Nagoya Institute of Technology, 4. Toyohashi University of Technology)

Keywords: EEG, Speech-imagery, Accent discrimination

Although analysis of speech-imagery electroencephalogram (EEG) signals has been actively conducted, few results have been reported that focus on pitch accent, a linguistic feature of imagined speech. In this report, we propose complex cepstrum-based accent discrimination from speech-imagery EEG signals. We first created a database containing the intervals of imagined spoken syllables, visually labeled from the line spectral patterns of EEG signals obtained after pooling the electrodes. We then constructed an accent discriminator using the complex cepstrum calculated from the amplitude spectrum of the EEG signals during speech imagery. In the discrimination process, an eigenspace is designed for each accent class from the training data. Experiments using the subspace method and the tensor product-based compound similarity method showed satisfactory scores in discriminating the accent types of imagined two-syllable utterances.
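The abstract describes the pipeline only at a high level. As a rough illustration of the kind of computation involved, the following Python sketch extracts a cepstral feature from a (synthetic) EEG frame and classifies it with a CLAFIC-style subspace method, one eigenspace per accent class. Everything here is an assumption for illustration: the function names, frame length, number of cepstral coefficients, and subspace dimensionality are hypothetical, a real cepstrum stands in for the paper's complex cepstrum, and the tensor product-based compound similarity is replaced by a plain projection similarity.

```python
import numpy as np

def cepstrum_features(eeg_frame, n_fft=256, n_ceps=20):
    """Cepstral feature vector from one EEG frame (parameters are hypothetical)."""
    windowed = eeg_frame * np.hanning(len(eeg_frame))
    spectrum = np.fft.rfft(windowed, n=n_fft)
    log_amp = np.log(np.abs(spectrum) + 1e-10)   # log amplitude spectrum
    ceps = np.fft.irfft(log_amp)                  # real cepstrum (stand-in for the complex cepstrum)
    return ceps[:n_ceps]                          # keep the low-quefrency coefficients

class SubspaceClassifier:
    """CLAFIC-style subspace method: one eigenspace (basis) per accent class."""
    def __init__(self, n_dims=5):
        self.n_dims = n_dims
        self.bases = {}                           # class label -> orthonormal basis (D, n_dims)

    def fit(self, X, y):
        for label in np.unique(y):
            Xc = X[y == label]
            # right singular vectors = eigenvectors of the class autocorrelation matrix
            _, _, vt = np.linalg.svd(Xc, full_matrices=False)
            self.bases[label] = vt[:self.n_dims].T
        return self

    def predict(self, X):
        preds = []
        for x in X:
            # similarity = squared norm of the projection onto each class subspace
            sims = {lab: np.sum((x @ B) ** 2) / (x @ x + 1e-10)
                    for lab, B in self.bases.items()}
            preds.append(max(sims, key=sims.get))
        return np.array(preds)

# Toy usage with synthetic frames; real input would be labeled speech-imagery EEG intervals.
rng = np.random.default_rng(0)
frames = rng.standard_normal((40, 256))
X = np.vstack([cepstrum_features(f) for f in frames])
y = np.array([0, 1] * 20)                         # two hypothetical accent classes
clf = SubspaceClassifier(n_dims=3).fit(X, y)
print(clf.predict(X[:4]))
```

In this sketch the per-class similarity is simply the fraction of the feature vector's energy captured by that class's eigenspace; the paper's compound similarity based on tensor products would replace that final scoring step.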
