[1Win4-94] Analyzing Mathematical Misconceptions based on Large Language Models
Keywords: Large Language Model, Natural Language Processing
Error analysis of multiple-choice questions in mathematics education plays a crucial role in diagnosing learners' conceptual understanding. Recent work has attempted to automate and streamline this analysis with large language models, but challenges remain in accurately capturing the complexity of mathematical reasoning errors and in generalizing to previously unseen types of misconceptions. We propose a method for predicting learners' conceptual misunderstandings from incorrect answer choices using the Eedi dataset, which comprises 1,869 mathematics multiple-choice questions. The proposed method employs a two-stage approach that combines efficient candidate retrieval with high-precision re-ranking. Furthermore, to handle misconceptions in the test data that were not seen during training, we introduce a post-processing step based on distribution estimation. In our evaluation experiments, the method achieved MAP@25 scores of 0.670 on the public leaderboard and 0.602 on the private leaderboard, demonstrating its effectiveness. These findings offer practical insights for building mathematics education support systems with large language models.
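As a rough illustration of the two-stage design described in the abstract, the sketch below retrieves candidate misconceptions with a lightweight bi-encoder and then re-scores the shortlist with a cross-encoder. This is a minimal example under stated assumptions, not the authors' implementation: the sentence-transformers models, the candidate pool size, and the example query text are all placeholders introduced here for illustration.

```python
# Minimal retrieve-then-rerank sketch (illustrative only; the concrete
# models, query format, and candidate pool size are assumptions, not
# the configuration reported in the paper).
import numpy as np
from sentence_transformers import SentenceTransformer, CrossEncoder

retriever = SentenceTransformer("all-MiniLM-L6-v2")              # stage 1: fast bi-encoder
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # stage 2: heavier re-ranker

# Inventory of misconception descriptions (placeholder examples).
misconceptions = [
    "Adds the denominators when adding fractions",
    "Confuses the order of operations",
    "Treats subtraction as commutative",
]
# A query built from the question and the chosen distractor (placeholder).
query = "Question: 1/2 + 1/3 = ?  Chosen incorrect answer: 2/5"

# Stage 1: embed the query and all candidates, keep the top-N by cosine similarity.
query_emb = retriever.encode(query, convert_to_numpy=True)
cand_embs = retriever.encode(misconceptions, convert_to_numpy=True)
sims = cand_embs @ query_emb / (
    np.linalg.norm(cand_embs, axis=1) * np.linalg.norm(query_emb)
)
top_n = np.argsort(-sims)[:50]

# Stage 2: re-score the retrieved candidates with the cross-encoder and
# emit the final ranking, truncated to 25 for MAP@25 evaluation.
pairs = [(query, misconceptions[i]) for i in top_n]
rerank_scores = reranker.predict(pairs)
ranking = [int(top_n[j]) for j in np.argsort(-rerank_scores)][:25]
print(ranking)
```

The split keeps stage 1 cheap enough to score the full misconception inventory for every question–distractor pair, while the more expensive cross-encoder spends its compute only on the retrieved shortlist.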