[3Rin4-17] Generalizing Argumentative Link Identification Model by Reducing Dependence on Superficial Cues
Keywords: natural language processing, pre-trained language model, RoBERTa, BERT, argument mining
Large-scale pre-trained language models have been shown to be effective for identifying the argumentative structure of texts, as for a wide range of other natural language processing tasks. However, recent studies show that these models exploit dataset-specific biases (henceforth, superficial cues) for prediction, and that suppressing them could further improve the models' generalization ability. We first investigate superficial cues in Argument Annotated Essays (AAE), a widely used dataset for argument mining, and show that AAE contains superficial cues for argumentative link identification, a subtask of argumentative structure identification. We then propose a simple method to suppress models' dependence on superficial cues without any manual annotation effort. Our experiments demonstrate that the proposed method has the potential to improve the generalization ability of argumentative link identification models.
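The abstract does not spell out how superficial cues can be detected, so the sketch below is illustrative rather than the authors' procedure. A common diagnostic in the dataset-bias literature is a partial-input baseline: train a classifier on only one side of each component pair and check whether it beats the majority-class baseline. The model name, the toy component pairs, and the binary linked/not-linked framing are all assumptions, not details from the paper.

```python
# Partial-input probe for superficial cues in link identification.
# Illustrative sketch only: the toy pairs and binary framing are
# assumptions, not the setup used in the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-base"

# Hypothetical (source, target, linked) component pairs standing in for AAE.
pairs = [
    ("Homework teaches time management.", "Schools should assign homework.", 1),
    ("Public transport reduces traffic.", "Schools should assign homework.", 0),
]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Encode ONLY the source component, hiding the link target entirely.
# If a model trained this way beats the majority-class baseline, the
# dataset likely contains superficial cues that allow predicting links
# without modeling the actual source-target relation.
enc = tokenizer([src for src, _, _ in pairs], padding=True,
                truncation=True, return_tensors="pt")
labels = torch.tensor([y for _, _, y in pairs])

model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
optimizer.zero_grad()
loss = model(**enc, labels=labels).loss  # one illustrative training step
loss.backward()
optimizer.step()
print(f"partial-input training loss: {loss.item():.4f}")
```

In practice one would fine-tune on the full AAE training split and compare the held-out accuracy of this partial-input probe against both the majority-class baseline and a full-input model; a large gap over the former would signal exploitable superficial cues.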