JSAI2025

Presentation information

Organized Session


[3L6-OS-32] OS-32

Thu. May 29, 2025 5:40 PM - 7:20 PM Room L (Room 1007)

Organizers: Ryota Takatsuki (AI Alignment Network / The University of Tokyo), Gouki Minegishi (The University of Tokyo), Yosuke Miyanishi (CyberAgent / Japan Advanced Institute of Science and Technology), Yu Takagi (National Institute of Informatics)

6:00 PM - 6:20 PM

[3L6-OS-32-02] In-Context Meta Learning Induces Multi-Phase Circuit Emergence

〇Gouki Minegishi1, Hiroki Furuta1, Shohei Taniguchi1, Yusuke Iwasawa1, Yutaka Matsuo1 (1. University of Tokyo)

Keywords: In-Context Learning, circuits

Transformer-based language models exhibit In-Context Learning (ICL), where predictions are made adaptively based on context. While prior work links induction heads to ICL through a phase transition, this only accounts for ICL when the answer is included within the context. However, an important property of practical ICL in large language models is the ability to meta-learn how to solve tasks from context, rather than simply copying answers from it; how such an ability is obtained during training remains largely unexplored. In this paper, we experimentally clarify how this meta-learning ability is acquired by analyzing the dynamics of the model's circuits during training. Specifically, we extend the copy task from previous research into an In-Context Meta Learning setting, where models must infer a task from examples to answer queries. Interestingly, in this setting, we find that there are multiple phases in the process of acquiring such abilities, and that a distinct circuit emerges in each phase, contrasting with the single-phase transition in induction heads. The emergence of such circuits can be related to several phenomena known in large language models, and our analysis leads to a deeper understanding of the source of the transformer's ICL ability.
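The abstract does not specify the exact task format, so the following is only a minimal sketch of the kind of in-context meta-learning data it describes. The vocabulary size, number of tasks, and sequence layout are illustrative assumptions, not the paper's actual setup: each sequence contains (item, label) demonstrations generated by one hidden label mapping, followed by a query item that never appears in the context, so the correct label cannot be copied and must be inferred by identifying the underlying task.

```python
# Illustrative sketch only: token vocabulary, task count, and sequence layout
# are assumptions; the paper's exact construction is not given in the abstract.
import random

NUM_ITEMS = 8      # assumed vocabulary of item tokens
NUM_TASKS = 4      # assumed number of candidate label mappings ("tasks")
NUM_EXAMPLES = 3   # in-context (item, label) demonstrations per sequence

random.seed(0)
# Each task is a random item -> label mapping. The task identity is never
# shown explicitly; it must be inferred from the in-context examples.
TASKS = [
    {item: random.randrange(NUM_ITEMS) for item in range(NUM_ITEMS)}
    for _ in range(NUM_TASKS)
]

def make_sequence():
    """Build one sequence: demonstrations from a hidden task, then a query.

    Unlike the plain copy task behind induction heads, the answer to the
    query is not present verbatim in the context; the model has to infer
    which task generated the demonstrations and apply it to the query item.
    """
    task = random.choice(TASKS)
    items = random.sample(range(NUM_ITEMS), NUM_EXAMPLES + 1)
    demos, query = items[:-1], items[-1]
    context = [(x, task[x]) for x in demos]   # (item, label) pairs
    return context, query, task[query]        # target label for the query

if __name__ == "__main__":
    ctx, query, target = make_sequence()
    print("context:", ctx, "query:", query, "target:", target)
```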
