JSAI2023

Presentation information


General Session » GS-5 Language media processing

[3A1-GS-6] Language media processing

Thu. Jun 8, 2023 9:00 AM - 10:40 AM Room A (Main hall)

Chair: Yuta Koreeda (Hitachi, Ltd.) [On-site]

9:40 AM - 10:00 AM

[3A1-GS-6-03] Time-aware Language Model using Multi-task Learning

〇Hikari Funabiki1, Lis Kanashiro Pereira1, Mayuko Kimura1, Masayuki Asahara2, Ayako Ochi2, Fei Cheng3, Ichiro Kobayashi1 (1. Ochanomizu University, 2. National Institute for Japanese Language and Linguistics, 3. Kyoto University)

Keywords:Language Model, Temporal Knowledge, Multi-task Learning

Temporal event understanding is helpful in many downstream natural language processing tasks. Understanding time requires commonsense knowledge of the various temporal aspects of events, such as duration and temporal order. However, expressions that directly convey such temporal knowledge are often omitted from sentences. Our goal is therefore to construct a general-purpose language model for understanding temporal common sense in Japanese. In this study, we conducted multi-task learning on several temporal tasks. In particular, we used the English temporal commonsense dataset MC-TACO translated into Japanese, in addition to other temporal classification tasks on tense, time span, temporal order, and facticity. We employed a multilingual language model as the text encoder, as well as a Japanese language model. Our experimental results showed that the choice of tasks for multi-task training, as well as the language model used, plays an important role in improving the overall performance of the tasks.
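
The abstract does not give implementation details, but the general setup it describes, a shared text encoder fine-tuned jointly on several temporal classification tasks, can be sketched roughly as below. The encoder name (xlm-roberta-base), the task names, the label counts, and the toy examples are assumptions for illustration only, not details taken from the paper.

```python
# Minimal sketch of multi-task fine-tuning: one shared encoder, one
# classification head per temporal task. Model name, tasks, and label
# counts are illustrative assumptions, not the authors' exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "xlm-roberta-base"  # assumed multilingual encoder

# Hypothetical temporal tasks mapped to their number of labels
TASKS = {"mctaco_ja": 2, "tense": 4, "time_span": 5, "temporal_order": 3}


class MultiTaskTemporalModel(nn.Module):
    def __init__(self, model_name: str, tasks: dict):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)  # shared encoder
        hidden = self.encoder.config.hidden_size
        # One linear head per task, applied to the first-token representation
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n_labels) for task, n_labels in tasks.items()}
        )

    def forward(self, task: str, **enc_inputs):
        out = self.encoder(**enc_inputs)
        cls = out.last_hidden_state[:, 0]  # <s>/[CLS] vector
        return self.heads[task](cls)


tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = MultiTaskTemporalModel(MODEL_NAME, TASKS)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Toy training step: alternate batches across tasks so the shared encoder
# receives gradients from every temporal task (labels are made up).
toy_batches = [
    ("tense", ["彼は昨日学校へ行った。"], torch.tensor([1])),
    ("temporal_order", ["朝食を食べてから出かけた。"], torch.tensor([0])),
]
for task, texts, labels in toy_batches:
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    logits = model(task, **enc)
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(task, float(loss))
```

In this kind of setup, only the task-specific heads differ between tasks; whether joint training helps then depends on how related the tasks are and on which pretrained encoder is used, which is consistent with the abstract's observation that both the task selection and the choice of language model matter.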
