[4Xin2-45] Conceptual Knowledge in the Pretrained Japanese BERT Model
Keywords: AI, NLP, transformer
In this paper, we conduct experiments to identify neurons that store conceptual knowledge in the feed-forward network (FFN) layers of a Japanese BERT model. Specifically, we identify the weight parameters (neurons) within a pretrained Japanese BERT model that are important for masked language modeling (MLM) tasks, and we confirm their locations and significance through comparative experiments. We also compare our results with prior studies on a pretrained English BERT model, showing that the experimental outcomes differ under certain conditions.
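To illustrate the kind of analysis described above, the following is a minimal sketch of FFN neuron attribution for an MLM prediction. It assumes the cl-tohoku/bert-base-japanese checkpoint (which requires the fugashi and ipadic packages for tokenization) and scores each FFN intermediate neuron by activation × gradient at the [MASK] position, a one-step approximation of the integrated-gradients attribution used in prior knowledge-neuron work; the prompt, target token, and scoring rule are illustrative assumptions, not the paper's exact procedure.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "cl-tohoku/bert-base-japanese"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Capture each layer's FFN intermediate activation (after GELU) and keep
# its gradient so the prediction can be attributed to individual neurons.
activations = {}

def make_hook(layer_idx):
    def hook(module, inputs, output):
        output.retain_grad()
        activations[layer_idx] = output
    return hook

for i, layer in enumerate(model.bert.encoder.layer):
    layer.intermediate.register_forward_hook(make_hook(i))

# A cloze-style prompt: "The capital of Japan is [MASK]."
text = f"日本の首都は{tokenizer.mask_token}です。"
inputs = tokenizer(text, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()

logits = model(**inputs).logits
target_id = tokenizer.convert_tokens_to_ids("東京")  # assumed single vocab token
model.zero_grad()
logits[0, mask_pos, target_id].backward()

# Attribution score per neuron: activation * gradient at the [MASK] position.
for i, act in sorted(activations.items()):
    scores = (act * act.grad)[0, mask_pos]
    top = scores.topk(3)
    print(f"layer {i:2d}: top neurons {top.indices.tolist()} "
          f"scores {[round(v, 4) for v in top.values.tolist()]}")
```

Ranking neurons by this score per layer gives a rough picture of where concept-relevant parameters concentrate; comparative experiments of the kind described in the abstract would then test the effect of suppressing or amplifying the top-ranked neurons.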