10:20 AM - 10:40 AM
[3P1-OS-46a-04] Assessing Logical Inference Capabilities of Large Language Models Through RDF Schema Entailment Rules
Keywords: RDF Schema entailment rules, counterfactual knowledge, large language models, ontology, logical inference capability
Large language models (LLMs) have recently demonstrated remarkable performance across various language tasks. However, they continue to face significant challenges in logical inference, often relying on pre-trained knowledge rather than performing genuine inference. Furthermore, their ability to perform inference tasks within ontology-based frameworks remains underexamined. This study focuses on RDF Schema entailment rules and leverages two types of knowledge datasets: real-world datasets constructed from Linked Open Data and counterfactual datasets created by systematically modifying real-world knowledge from multiple perspectives. We propose a novel evaluation methodology that uses these datasets to assess the inference capabilities of LLMs. In the experiments, LLMs were given prompts containing entailment rules and knowledge and asked to generate inference outputs, which were then scored against the real-world and counterfactual knowledge using precision, recall, and F1. The findings reveal inference failures on rare knowledge structures and a reliance on resource-name patterns, underscoring the limitations of LLMs in inference with RDF Schema entailment rules.
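To make the evaluation scheme concrete, the following is a minimal Python sketch (not the authors' code) of the general idea: compute the gold-standard entailments of a small knowledge graph under one RDF Schema rule (rdfs9, subclass membership propagation), then score a hypothetical set of LLM-generated triples against that gold standard with precision, recall, and F1. All resource names, the example graph, and the stand-in LLM output are illustrative assumptions; the paper's actual datasets, prompts, and parsing pipeline are not shown here.

# Gold rule rdfs9: (x rdf:type C1) and (C1 rdfs:subClassOf C2) => (x rdf:type C2)
# Illustrative knowledge graph; resource names are hypothetical.
knowledge = {
    ("ex:Mozart", "rdf:type", "ex:Composer"),
    ("ex:Composer", "rdfs:subClassOf", "ex:Musician"),
    ("ex:Musician", "rdfs:subClassOf", "ex:Person"),
}

def rdfs9_entailments(triples):
    """Apply rdfs9 to a fixpoint and return only the newly entailed triples."""
    closure = set(triples)
    changed = True
    while changed:
        changed = False
        subclass = {(s, o) for s, p, o in closure if p == "rdfs:subClassOf"}
        for s, p, o in list(closure):
            if p == "rdf:type":
                for c1, c2 in subclass:
                    new = (s, "rdf:type", c2)
                    if o == c1 and new not in closure:
                        closure.add(new)
                        changed = True
    return closure - set(triples)

gold = rdfs9_entailments(knowledge)

# Stand-in for triples parsed from an LLM's generated inference output.
llm_output = {
    ("ex:Mozart", "rdf:type", "ex:Musician"),   # correct entailment
    ("ex:Mozart", "rdf:type", "ex:Human"),      # a wrong guess
}

tp = len(gold & llm_output)
precision = tp / len(llm_output) if llm_output else 0.0
recall = tp / len(gold) if gold else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")

With counterfactual datasets, the same scoring applies, but the graph's facts are deliberately altered so that correct answers cannot be retrieved from pre-trained knowledge and must come from applying the entailment rule itself.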