2:20 PM - 2:40 PM
[1T3-GS-6-05] Prompt Optimization for Training Generalizable Language Models
[[Online]]
Keywords: text generation, meta learning, bilevel optimization
Recently, instruction tuning has been attracting significant attention as a method for training generalizable language models (e.g., ChatGPT).
Although various prompts have been manually created for instruction tuning, it remains unclear which kinds of prompts are optimal for acquiring cross-task generalization ability.
This study presents \emph{instruction optimization}, which optimizes training prompts via bilevel optimization, to clarify which kinds of prompts are optimal for instruction tuning.
Experimental results demonstrate that instruction optimization increases the diversity of prompts and improves generalization performance in the zero-shot setting, whereas in the few-shot setting, reusing the same examples is more effective than varying the exemplars.
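As a rough illustrative sketch (not the paper's own notation), a bilevel formulation of instruction optimization can be written with an outer problem over prompt parameters $\phi$ and an inner problem over model parameters $\theta$; the symbols $\phi$, $\theta$, $\mathcal{L}_{\mathrm{train}}$, and $\mathcal{L}_{\mathrm{val}}$ below are assumptions introduced for illustration.

% Hypothetical bilevel sketch: the outer level selects training prompts (phi)
% so that the model trained at the inner level generalizes well on held-out tasks.
\begin{align}
  \min_{\phi}\; & \mathcal{L}_{\mathrm{val}}\!\left(\theta^{*}(\phi)\right) \\
  \text{s.t.}\; & \theta^{*}(\phi) = \operatorname*{arg\,min}_{\theta}\; \mathcal{L}_{\mathrm{train}}(\theta, \phi)
\end{align}

Here the inner objective $\mathcal{L}_{\mathrm{train}}$ is the instruction-tuning loss under prompts parameterized by $\phi$, and the outer objective $\mathcal{L}_{\mathrm{val}}$ measures cross-task generalization on unseen tasks.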