9:40 AM - 10:00 AM
[3A1-GS-6-03] Time-aware Language Model using Multi-task Learning
Keywords: Language Model, Temporal Knowledge, Multi-task Learning
Temporal event understanding is helpful in many downstream natural language processing tasks. Understanding time requires commonsense knowledge of various temporal aspects of events, such as duration and temporal order. However, explicit expressions of such temporal knowledge are often omitted from sentences. Therefore, our goal is to construct a general-purpose language model for understanding temporal common sense in Japanese. In this study, we conducted multi-task learning on several temporal tasks. In particular, we used the English temporal commonsense dataset MC-TACO translated into Japanese, in addition to other temporal classification tasks covering tense, time span, temporal order, and factuality. We employed both a multilingual language model and a Japanese language model as the text encoder. Our experimental results showed that the choice of tasks for multi-task training, as well as the choice of language model, plays an important role in improving the overall performance on these tasks.
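As a rough illustration of the multi-task setup described in the abstract, the sketch below fine-tunes a shared text encoder with one classification head per temporal task. The encoder name, label counts, and round-robin training loop are assumptions for illustration only, not details taken from the paper.

```python
# Minimal sketch: shared encoder + per-task classification heads (assumed setup).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskTemporalModel(nn.Module):
    def __init__(self, encoder_name="bert-base-multilingual-cased",
                 task_num_labels=None):
        super().__init__()
        # Shared encoder (a multilingual or Japanese language model)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One lightweight classification head per task
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden, n_labels)
            for task, n_labels in (task_num_labels or {}).items()
        })

    def forward(self, task, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] representation
        return self.heads[task](cls)        # task-specific logits

# Hypothetical label counts for the tasks mentioned in the abstract.
tasks = {"mctaco_ja": 2, "tense": 3, "time_span": 4,
         "temporal_order": 2, "factuality": 2}
model = MultiTaskTemporalModel(task_num_labels=tasks)
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Round-robin over tasks: each step draws a batch from one task and
# updates the shared encoder plus that task's head.
def train_step(task, texts, labels):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    logits = model(task, batch["input_ids"], batch["attention_mask"])
    loss = loss_fn(logits, torch.tensor(labels))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```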