JSAI2023

Presentation information

International Session

[2U6-IS-1c] Knowledge engineering

Wed. Jun 7, 2023 5:30 PM - 6:50 PM Room U (Online)

Chair: Akinori Abe (Chiba University)

6:10 PM - 6:30 PM

[2U6-IS-1c-03] Improving symbolic music pre-training using bar-level variational inference

〇Yingfeng Fu 1, Yusuke Tanimura 2, Hidemoto Nakada 2 (1. University of Tsukuba, 2. AIST)

[[Online, Work-in-progress]]

Keywords:pre-training, music understanding, NLP

Pre-training is now a major trend in NLP, and BERT-like models have proven powerful at solving downstream tasks. Inspired by the masked language model pre-training strategy, context can be learned by recovering masked musical tokens. In our previous work, we evaluated MusicBERT and improved its model structure. The resulting models performed well on melody extraction, a token-level classification task, but on sequence-level tasks such as composer and emotion classification their performance still needs improvement. A possible reason is that the previous pre-training method cannot learn general, sequence-level information from the context. To address this, we propose a bar-level recovery pre-training task based on variational inference, which aims to better capture general sequential information from the context. In this work in progress, we compare our method against previous work.
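The abstract does not give implementation details, but the core idea of recovering a masked bar via variational inference can be illustrated with a standard ELBO-style objective: encode the unmasked context into a latent Gaussian, sample with the reparameterization trick, and score the masked bar's tokens under a decoder while penalizing KL divergence from a unit-Gaussian prior. The sketch below is purely illustrative; the random linear "encoder" and "decoder" weights, the pooling choice, and all dimensions are hypothetical stand-ins for learned networks, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def bar_level_negative_elbo(context_emb, masked_bar_tokens, vocab_size, latent_dim=8):
    """Toy bar-level recovery loss (negative ELBO).

    context_emb: (num_tokens, d) embeddings of the *unmasked* context.
    masked_bar_tokens: integer token ids of the bar to be recovered.
    All weights here are random placeholders for learned parameters.
    """
    d = context_emb.shape[-1]
    # Hypothetical encoder: pooled context -> mean / log-variance of q(z | context)
    W_mu = rng.normal(scale=0.1, size=(d, latent_dim))
    W_lv = rng.normal(scale=0.1, size=(d, latent_dim))
    h = context_emb.mean(axis=0)                      # mean-pool the context
    mu, logvar = h @ W_mu, h @ W_lv
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
    eps = rng.normal(size=latent_dim)
    z = mu + np.exp(0.5 * logvar) * eps
    # Closed-form KL( q(z|x) || N(0, I) ) for a diagonal Gaussian
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    # Hypothetical decoder: latent -> token logits shared across masked positions
    W_dec = rng.normal(scale=0.1, size=(latent_dim, vocab_size))
    logits = z @ W_dec
    log_probs = logits - np.log(np.sum(np.exp(logits)))   # log-softmax
    recon = -np.sum(log_probs[masked_bar_tokens])          # NLL of the masked bar
    return recon + kl                                      # negative ELBO to minimize

loss = bar_level_negative_elbo(
    context_emb=rng.normal(size=(16, 32)),   # 16 context tokens, dim 32
    masked_bar_tokens=np.array([3, 7, 1]),   # 3 tokens in the masked bar
    vocab_size=50,
)
```

In a real model the pooled context would come from the transformer encoder and the weights would be trained jointly with the masked-recovery objective, so that the latent `z` is forced to summarize sequence-level information rather than just local token context.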
