The 37th Annual Conference of the Japanese Society for Artificial Intelligence (JSAI 2023)

Presentation Information

International Session


[2U6-IS-1c] Knowledge engineering

Wednesday, June 7, 2023, 17:30–18:50, Room U (remote)

Chair: Akinori Abe (Chiba University)

18:10–18:30

[2U6-IS-1c-03] Improving symbolic music pre-training using bar-level variational inference

〇Yingfeng Fu (1), Yusuke Tanimura (2), Hidemoto Nakada (2) (1. University of Tsukuba, 2. AIST)

[Online, Work-in-progress]

Keywords: pre-training, music understanding, NLP

Pre-training has become a major trend in NLP, and BERT-like models have proven powerful on downstream tasks. Inspired by the masked language model pre-training strategy, musical context can be learned by recovering masked musical tokens. In our previous work, we evaluated MusicBERT and improved its model structure. The resulting models performed well on melody extraction, a token-level classification task. However, on sequence-level tasks such as composer and emotion classification, their performance still needs improvement. A possible reason is that the previous pre-training method cannot learn general sequence-level information from the context. To address this problem, we propose a bar-level recovery pre-training task using variational inference, which aims to better capture general sequential information from context. In this work in progress, we compare our method with previous approaches.
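To illustrate the difference between token-level masking and the bar-level recovery idea described above, the following is a minimal sketch (not the authors' implementation; token and bar representations are hypothetical) in which all tokens belonging to one randomly chosen bar are masked, so the model must reconstruct an entire bar from the surrounding context rather than scattered individual tokens.

```python
import random

def mask_bar_level(tokens, bar_ids, mask_token="[MASK]", seed=0):
    """Mask every token belonging to one randomly chosen bar.

    tokens  : list of musical tokens (hypothetical string encoding)
    bar_ids : parallel list giving the bar index of each token
    Returns the masked sequence and the index of the masked bar.
    """
    rng = random.Random(seed)
    target_bar = rng.choice(sorted(set(bar_ids)))
    masked = [mask_token if b == target_bar else t
              for t, b in zip(tokens, bar_ids)]
    return masked, target_bar

# Toy usage: two bars of three tokens each.
tokens = ["C4", "E4", "G4", "F4", "A4", "C5"]
bars = [0, 0, 0, 1, 1, 1]
masked, target = mask_bar_level(tokens, bars)
```

In standard BERT-style pre-training, individual tokens would be masked independently; masking a whole bar instead forces the recovery objective to operate at the level of a complete musical unit, which matches the sequence-level information the abstract says the earlier method failed to capture.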
