2:50 PM - 3:10 PM
[2E4-GS-6-05] Analyzing Neurons Engaged in Multi-step Reasoning in Pre-trained Language Models
Keywords: language model, chain-of-thought, multi-step reasoning
Prompting has attracted attention as a way to draw out the capabilities of pre-trained language models; one such technique is the chain-of-thought prompt. Chain-of-thought prompts encourage a model to explicitly express intermediate reasoning steps before deriving a final answer, and are known to improve multi-step reasoning performance. However, it remains unclear how chain-of-thought prompts affect the model internally and enable multi-step reasoning.
Building on existing studies that interpret task performance through the activation of neurons in language models, this paper examines how neurons inside the model behave during multi-step reasoning tasks. The results reveal that certain neurons are commonly activated across multiple chain-of-thought prompts during multi-step reasoning, and that suppressing the activation of these neurons degrades reasoning performance. These findings offer insight into the mechanisms by which models acquire reasoning ability.
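As a rough illustration of the kind of suppression experiment the abstract describes, the sketch below zeroes out selected MLP neuron activations in a GPT-2-style model via a forward hook and then runs generation on a chain-of-thought prompt. This is a minimal sketch under assumptions, not the paper's actual setup: the model ("gpt2"), the layer index, and the neuron indices are hypothetical placeholders, and the paper's own neuron-selection procedure is not reproduced here.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model.eval()

# Hypothetical values: in the paper, such neurons would be the ones found
# to be commonly activated across multiple chain-of-thought prompts.
target_layer = 6
target_neurons = [118, 2047, 3051]

def suppress_hook(module, inputs, output):
    # output has shape (batch, seq_len, intermediate_size);
    # zero the chosen units to suppress their activation.
    output[:, :, target_neurons] = 0.0
    return output

# Hook the activation function inside the chosen MLP block,
# so every forward pass runs with the target neurons silenced.
handle = model.transformer.h[target_layer].mlp.act.register_forward_hook(suppress_hook)

prompt = "Q: Roger has 5 tennis balls and buys 2 more cans of 3. How many does he have? Let's think step by step."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # restore normal behavior for the baseline comparison
```

Comparing generations (or task accuracy over a benchmark) with and without the hook registered gives the kind of before/after contrast the abstract reports, where suppression worsens reasoning performance.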