JSAI2024

Presentation information

Poster Session


[3Xin2] Poster session 1

Thu. May 30, 2024 11:00 AM - 12:40 PM Room X (Event hall 1)

[3Xin2-72] Effectiveness of Knowledge Augmentation Prompts from Commonsense Knowledge Graphs

〇Kei Okada1, Rafal Rzepka2, Kenji Araki2 (1.Graduate School of Information Science and Technology, Hokkaido University, 2.Faculty of Information Science and Technology, Hokkaido University)

Keywords: Commonsense knowledge graph, LLM

Before large language models (LLMs), models such as BERT were combined with external knowledge such as knowledge graphs to improve performance on various tasks. LLMs, however, can achieve similar performance without external knowledge, simply by changing the input (prompt). In this study, we investigated the effectiveness of knowledge augmentation with knowledge graphs for LLMs. On the Japanese commonsense reasoning task JCommonsenseQA, we added knowledge from the commonsense knowledge graph ConceptNet to the input and tested five language models in mostly zero-shot settings. As validation data, we randomly selected 100 questions from the JCommonsenseQA training set. We then converted ConceptNet knowledge into natural sentences and, for each question, extracted the 10 most similar knowledge statements. These 10 statements were added to the input prompt given to the LLMs. Evaluation was done by exact match, and accuracy was calculated. Our experiment showed that accuracy decreased or remained the same for all models when knowledge augmentation was applied. This suggests that knowledge augmentation prompts based on commonsense knowledge graphs may not be effective for LLMs.
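The abstract describes a retrieval-and-prompting pipeline: verbalize ConceptNet triples into sentences, retrieve the 10 sentences most similar to each question, prepend them to a zero-shot prompt, and score answers by exact match. The sketch below illustrates one way this could look; the embedding model, verbalization templates, prompt wording, and helper names are assumptions for illustration, not the authors' actual implementation.

```python
# Minimal sketch of the knowledge-augmentation pipeline described in the abstract.
# The similarity model, templates, and prompt format are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed similarity model

def triple_to_sentence(head: str, relation: str, tail: str) -> str:
    """Verbalize a ConceptNet triple into a simple natural-language sentence."""
    templates = {
        "IsA": "{h} is a kind of {t}.",
        "UsedFor": "{h} is used for {t}.",
        "AtLocation": "{h} can be found at {t}.",
    }
    return templates.get(relation, "{h} is related to {t}.").format(h=head, t=tail)

def top_k_knowledge(question: str, sentences: list[str], k: int = 10) -> list[str]:
    """Return the k knowledge sentences most similar to the question."""
    q_emb = embedder.encode(question, convert_to_tensor=True)
    s_emb = embedder.encode(sentences, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, s_emb)[0]
    ranked = scores.argsort(descending=True)[:k]
    return [sentences[i] for i in ranked]

def build_prompt(question: str, choices: list[str], knowledge: list[str]) -> str:
    """Prepend retrieved knowledge to a zero-shot multiple-choice prompt."""
    knowledge_block = "\n".join(f"- {s}" for s in knowledge)
    options = "\n".join(f"{i}. {c}" for i, c in enumerate(choices))
    return (
        f"Knowledge:\n{knowledge_block}\n\n"
        f"Question: {question}\n{options}\n"
        "Answer with the correct choice only."
    )

def exact_match_accuracy(predictions: list[str], golds: list[str]) -> float:
    """Exact-match accuracy over the evaluation set."""
    return sum(p.strip() == g.strip() for p, g in zip(predictions, golds)) / len(golds)
```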
