[3Rin4-39] Further Pretraining BERT for Causality Existence Classification in Financial Domain
Keywords: Causal Extraction, Natural Language Processing, Language Resources
In this research, we propose J-FinBERT, a model further pretrained from BERT on a Japanese financial corpus, to classify whether causality is present in financial texts. We verify the effect of further pretraining on prediction accuracy and on robustness to small datasets and noisy labels in our task. Our experiments show that further pretraining on a domain-specific corpus improves prediction accuracy and robustness to small datasets in our task, but does not improve robustness to noisy labels.
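The pipeline described above (continue masked-language-model pretraining on domain text, then fine-tune the encoder for binary causality-existence classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a tiny randomly initialized BERT from Hugging Face `transformers` in place of a real pretrained Japanese BERT checkpoint, and random token IDs in place of tokenized financial sentences.

```python
import torch
from transformers import BertConfig, BertForMaskedLM, BertForSequenceClassification

# Tiny random-initialized BERT standing in for a pretrained Japanese BERT
# (J-FinBERT starts from an actual pretrained checkpoint, not random weights).
config = BertConfig(vocab_size=100, hidden_size=32, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=64)

# Step 1: further pretraining with masked language modeling on domain text.
mlm_model = BertForMaskedLM(config)
input_ids = torch.randint(0, 100, (4, 16))  # stand-in for tokenized financial sentences
labels = input_ids.clone()                  # MLM targets (token masking omitted for brevity)
mlm_loss = mlm_model(input_ids=input_ids, labels=labels).loss

# Step 2: fine-tune for binary causality-existence classification,
# transferring the further-pretrained encoder weights.
clf_model = BertForSequenceClassification(config)  # num_labels defaults to 2
clf_model.bert.load_state_dict(mlm_model.bert.state_dict(), strict=False)
cls_labels = torch.randint(0, 2, (4,))
out = clf_model(input_ids=input_ids, labels=cls_labels)
print(tuple(out.logits.shape))  # one logit pair (causal / non-causal) per sentence
```

In practice, step 1 would run over the financial corpus for many steps before the encoder is handed to step 2, which is fine-tuned on the labeled causality dataset.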