Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

By Jason Wei et al.
Published on Jan. 10, 2023

Table of Contents

1. Abstract
2. Introduction
3. Chain-of-Thought Prompting
4. Arithmetic Reasoning
5. Experimental Setup
6. Results
7. Ablation Study

Summary

This paper shows that generating a chain of thought, a series of intermediate reasoning steps, significantly improves the ability of large language models to perform complex reasoning. By providing a few chain-of-thought demonstrations as exemplars in the prompt, the study achieves improved performance on arithmetic, commonsense, and symbolic reasoning tasks. The approach, called chain-of-thought prompting, lets models decompose multi-step problems into intermediate steps, which also offers an interpretable window into the model's behavior. Experiments across several benchmarks show that chain-of-thought prompting is an emergent ability of model scale: substantial gains appear only for models of roughly 100B parameters or more, and the gains are largest on the most complex problems. Compared against standard prompting and prior best results, the method achieves state-of-the-art accuracy, including on the GSM8K benchmark of math word problems.
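To make the prompt format concrete, below is a minimal sketch in Python of how a chain-of-thought prompt is assembled. The exemplar question and worked answer are taken from the paper's Figure 1; the helper name build_cot_prompt is illustrative, and the resulting string would be passed to any text-completion model.

```python
# A minimal sketch of chain-of-thought prompting: prepend a worked
# exemplar (question + step-by-step reasoning + answer) so the model
# imitates the reasoning format on a new question.

# Exemplar from Figure 1 of the paper.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Build a few-shot prompt whose exemplar shows intermediate steps."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

# New question (also from Figure 1); the model is expected to continue
# with its own chain of thought before stating the final answer.
print(build_cot_prompt(
    "The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?"
))
```

In practice the paper uses eight such exemplars; standard prompting uses the same question-answer pairs but omits the intermediate reasoning sentences.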