JSAI2025

Presentation information

Poster Session

[3Win5] Poster session 3

Thu. May 29, 2025 3:30 PM - 5:30 PM Room W (Event hall D-E)

[3Win5-105] A Decision-Table-Based Evaluation for Quantifying LLM Code Understanding and Examination of Its Applicability

〇Takafumi Sakura1, Ryo Soga1, Hideyuki Kanuka1 (1.Hitachi, Ltd.)

Keywords: LLM, Code Understanding, Decision Table

In tasks such as code generation and bug fixing with large language models (LLMs), it is crucial that the models accurately understand and verify the generated code. However, most existing evaluation methods rely on limited inputs and execution sequences prepared by humans, and therefore fail to measure a model's ability to design input conditions comprehensively. In this study, we propose an evaluation method that leverages decision tables, which are widely used in software development, to assess LLMs' control-flow understanding and input-condition coverage. Experimental results show that while the LLM achieves high accuracy on small-scale functions, its output for larger-scale functions contains omissions and errors, revealing limitations in the model's capabilities. Future work will apply this approach to a broader set of programs, identify the factors that limit LLM performance, and explore guidelines for improvement.
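To make the evaluation idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: the abstract does not specify the table format or scoring, so the toy function, the decision-table encoding (condition tuples mapped to actions), and the coverage/accuracy metrics below are all assumptions introduced for illustration.

```python
def grade(score: int, attendance: float) -> str:
    """Toy function under test: two conditions determine the outcome."""
    if attendance < 0.8:
        return "fail"
    return "pass" if score >= 60 else "fail"

# Reference decision table for grade(), built by hand.
# Conditions: C1 = (score >= 60), C2 = (attendance >= 0.8); values are actions.
reference_table = {
    (True,  True):  "pass",
    (True,  False): "fail",
    (False, True):  "fail",
    (False, False): "fail",
}

# A hypothetical decision table parsed from an LLM's answer. It omits one
# rule and mislabels another, mimicking the omissions and errors the
# abstract reports for larger functions.
llm_table = {
    (True,  True):  "pass",
    (False, True):  "fail",
    (True,  False): "pass",  # wrong action for this rule
}

def evaluate(llm: dict, ref: dict) -> tuple[float, float]:
    """Score an LLM's table against the reference.

    Returns (rule coverage, action accuracy over the covered rules).
    """
    covered = [rule for rule in ref if rule in llm]
    coverage = len(covered) / len(ref)
    accuracy = (
        sum(llm[rule] == ref[rule] for rule in covered) / len(covered)
        if covered else 0.0
    )
    return coverage, accuracy

cov, acc = evaluate(llm_table, reference_table)
print(f"coverage={cov:.2f} accuracy={acc:.2f}")  # coverage=0.75 accuracy=0.67
```

Under this sketch, "input-condition coverage" is the fraction of reference rules the model enumerates at all, while accuracy checks whether the actions it assigns to those rules match the actual control flow.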
