Presentation information

Interactive Session

[3Rin4] Interactive 1

Thu. Jun 11, 2020 1:40 PM - 3:20 PM Room R01 (jsai2020online-2-33)

[3Rin4-17] Generalizing Argumentative Link Identification Model by Reducing Dependence on Superficial Cues

〇Yuto Hanayasu1,2, Shunsuke Ikeda1,2, Makoto Kubodera1, Naoya Inoue3 (1.Nextremer Co., Ltd., 2.Tokyo Denki University, 3.Tohoku University)

Keywords:natural language processing, pre-trained language model, RoBERTa, BERT, argument mining

Large-scale pretrained language models have been shown to be effective for identifying the argumentative structure of texts, as well as for a wide range of other natural language processing tasks. However, recent studies show that these models exploit dataset-specific biases (henceforth, superficial cues) for prediction, and that suppressing these cues could further improve the models' generalization ability. We first investigate superficial cues in Argument Annotated Essays (AAE), a widely used dataset for argument mining, and show that superficial cues exist for argumentative link identification, a subtask of argumentative structure identification, in AAE. We then propose a simple method to suppress a model's dependence on superficial cues without any manual annotation effort. Our experiments demonstrate that the proposed method has the potential to improve the generalization ability of argumentative link identification models.
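The abstract does not specify how superficial cues are detected in AAE. A common approach in the dataset-bias literature is to rank tokens by how strongly their presence correlates with a label; tokens whose conditional label distribution deviates sharply from the prior are candidate cues. A minimal sketch of that idea, on a hypothetical toy dataset (the function name, data, and labels are illustrative, not the authors' method):

```python
from collections import Counter

def cue_scores(examples):
    """Rank tokens by how strongly they correlate with a label.

    A token that co-occurs with one label far more often than the
    label's base rate predicts is a candidate superficial cue.
    examples: list of (tokens, label) pairs.
    Returns {(token, label): p(label | token) - p(label)}.
    """
    label_counts = Counter(label for _, label in examples)
    total = len(examples)
    token_label = Counter()  # how often each (token, label) pair occurs
    token_counts = Counter()  # how many examples contain each token
    for tokens, label in examples:
        for tok in set(tokens):  # count each token once per example
            token_label[(tok, label)] += 1
            token_counts[tok] += 1
    scores = {}
    for (tok, label), joint in token_label.items():
        p_label_given_tok = joint / token_counts[tok]
        p_label = label_counts[label] / total
        scores[(tok, label)] = p_label_given_tok - p_label
    return scores

# Toy data: the discourse marker "therefore" only ever occurs
# in linked pairs, so it surfaces as a strong cue for "link".
data = [
    (["therefore", "we", "conclude"], "link"),
    (["therefore", "it", "follows"], "link"),
    (["the", "sky", "is", "blue"], "no-link"),
    (["water", "is", "wet"], "no-link"),
]
scores = cue_scores(data)
print(scores[("therefore", "link")])  # → 0.5 (max possible lift here)
```

In this toy setup, p(link | "therefore") = 1.0 against a prior of 0.5, so the cue score is 0.5; a model could predict "link" from the marker alone without reading the paired sentences, which is exactly the kind of shortcut the proposed method aims to suppress.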
