JSAI2024

Presentation information

[4Xin2] Poster session 2

Fri. May 31, 2024 12:00 PM - 1:40 PM Room X (Event hall 1)

[4Xin2-43] A Comparative Analysis of Instruction Prompt Formats on Code Generation Task

〇Waka Ito1, Miyu Sato1, Shiho Takano1, Kimio Kuramitsu1 (1.Japan Women's University )

Keywords: Large Language Model, Code Generation, Prompt

In the development of Large Language Models for Code (Code LLMs), instruction tuning has been found effective for improving performance. Instruction tuning improves generalization by further training a model on instruction data. However, opinions vary on which instruction format is optimal, and the question remains open. The purpose of this study is to investigate the impact of different instruction formats on code generation performance, in order to strengthen the effects of instruction tuning for Code LLMs. In particular, we focused on the output formats used for code extraction and conducted experiments, and we visualized the results. The experiments revealed performance differences in code generation across models depending on the output format, and showed that the Markdown format was the most versatile. Moreover, specifying an output format yielded a higher accuracy rate than leaving the format unspecified.
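One practical reason a fixed output format matters is that generated code must be extracted from the model's response before evaluation. The abstract does not describe the authors' extraction procedure, so the following is only an illustrative sketch of why a Markdown-fenced format makes extraction a simple pattern match (the function name and regex are assumptions, not the paper's method):

```python
import re

def extract_markdown_code(response: str):
    """Return the contents of the first Markdown-fenced code block
    in an LLM response, or None if no fenced block is found.

    Illustrative only: the paper does not specify its extraction
    procedure. The point is that instructing the model to emit
    Markdown fences reduces extraction to one regular expression.
    """
    match = re.search(r"```(?:\w+)?\n(.*?)```", response, re.DOTALL)
    return match.group(1) if match else None

# Example response that follows the instructed Markdown format.
response = (
    "Here is the solution:\n"
    "```python\n"
    "def add(a, b):\n"
    "    return a + b\n"
    "```\n"
    "Thank you."
)
print(extract_markdown_code(response))
```

If the model instead answers in free-form prose, no such uniform rule exists, which is one plausible mechanism behind the reported accuracy gap between specified and unspecified output formats.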
