[3Xin2-30] Software Test Generation Incorporating System Specification Information through Large Language Models and Adoption Possibility Evaluation of the Generated Results
Keywords: Large Language Models, Software Testing, Automation
With the growing social adoption of IT systems, exemplified by DX, there is increasing interest in improving software reliability. However, the diverse and vast number of cases to be tested, together with the large amount of system specification information that must be understood as prerequisite knowledge, poses significant challenges. Traditionally, addressing these issues has required extensive work by human experts. In this study, we developed a software test generation model that incorporates system specification information using large language models, which have developed rapidly in recent years. Our model achieves low-cost inference by using open-source models instead of closed models such as GPT-4 and by efficiently selecting the system specification information included in the prompt. In addition, the barrier to practical application is lowered by applying a new evaluation metric that approximates the judgment of human experts. Experimental results on actual test case data show that our model generates test cases with high accuracy and at a lower cost than GPT-4.
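The abstract does not describe the selection mechanism in detail; the sketch below is only an illustration of what "efficiently selecting system specification information for the prompt" might look like. All function names, the lexical-overlap scoring, and the prompt template are assumptions for illustration, not the authors' actual method.

```python
# Hypothetical sketch: pick the specification passages most relevant to a
# target feature and assemble a compact test-generation prompt for an
# open-source LLM. Scoring and prompt wording are illustrative assumptions.

def relevance(query: str, passage: str) -> float:
    """Crude lexical-overlap score between a feature description and a spec passage."""
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

def select_spec_passages(feature: str, spec_passages: list[str], top_k: int = 3) -> list[str]:
    """Keep only the top-k most relevant passages so the prompt stays short and cheap."""
    ranked = sorted(spec_passages, key=lambda p: relevance(feature, p), reverse=True)
    return ranked[:top_k]

def build_prompt(feature: str, spec_passages: list[str]) -> str:
    """Insert the selected specification excerpts into a test-generation prompt."""
    context = "\n".join(f"- {p}" for p in select_spec_passages(feature, spec_passages))
    return (
        "You are a software test engineer.\n"
        f"System specification excerpts:\n{context}\n\n"
        f"Generate test cases (preconditions, steps, expected results) for: {feature}\n"
    )

if __name__ == "__main__":
    spec = [
        "Users must re-enter their password after 30 minutes of inactivity.",
        "The report export feature supports CSV and PDF formats.",
        "Login attempts are locked out after five consecutive failures.",
    ]
    # The resulting prompt would then be sent to a locally hosted open-source LLM.
    print(build_prompt("login lockout behavior", spec))
```

In practice, a paper like this would likely use embedding-based retrieval rather than lexical overlap; the point of the sketch is only that trimming the specification context reduces prompt length and therefore inference cost.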