9:00 AM - 9:20 AM
[3A1-GS-10-01] Dynamic Chain of Thought Correction Based on Correctness in Pre-Inference by LLMs
Keywords: AI, LLM
In-Context Learning (ICL) techniques, such as Zero-Shot and Few-Shot prompting, enable large language models (LLMs) to improve their performance on reasoning tasks without requiring fine-tuning. However, Zero-Shot prompting struggles with complex tasks, while Few-Shot prompting requires significant design effort and task-specific knowledge. This paper introduces a novel approach that overcomes these challenges in settings where only a limited set of correct answers is available. Our method has LLMs reason over the training dataset, analyzing both correct and incorrect answers to extract reasoning principles, which are then dynamically applied to improve inference at test time. We demonstrate its effectiveness on benchmark datasets such as MMLU-Pro, AQuA, and GSM8K, achieving up to a 1% improvement in accuracy over Zero-Shot approaches.