What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning
By Jane Pan et al.
Table of Contents
Abstract
1 Introduction
2 Task Recognition and Task Learning
3 Experimental Setup
4 Results
4.1 Main Results
4.2 Further Analysis
5 Related Work
6 Conclusion
Summary
Large language models (LLMs) exploit in-context learning (ICL) to solve tasks with only a few demonstrations. Some argue that LLMs merely recall concepts already acquired during pre-training, while others suggest that ICL performs implicit learning over the demonstrations. The authors characterize two ways in which ICL leverages demonstrations. Task recognition (TR) captures the extent to which LLMs can recognize a task from demonstrations, even without ground-truth labels. Task learning (TL) is the ability to capture new input-label mappings unseen in pre-training. To disentangle TR and TL, the authors conduct experiments with various datasets and LLM families. The results show that models can achieve non-trivial performance through TR alone, while TL emerges only with larger models and improves with more demonstrations. By observing how TR and TL manifest under different conditions, the paper reveals two distinct forces behind ICL and advocates distinguishing them in future ICL research, given their different natures.
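To make the TR/TL distinction concrete, below is a minimal sketch of how demonstrations can be constructed under three label settings in the spirit of the paper's setup: gold labels (standard ICL), random labels (no ground-truth signal, so performance reflects TR), and abstract labels (arbitrary symbols standing in for the real labels, probing TL on mappings unseen in pre-training). The sentiment task, label strings, abstract symbols, and the `build_prompt` helper are illustrative assumptions, not the authors' code.

```python
import random

# Illustrative sentiment demonstrations; the paper's actual datasets,
# templates, and label sets may differ.
demos = [
    ("the movie was a delight from start to finish", "positive"),
    ("a tedious, joyless two hours", "negative"),
    ("sharp writing and a stellar cast", "positive"),
    ("the plot collapses under its own weight", "negative"),
]
label_space = ["positive", "negative"]
abstract_map = {"positive": "foo", "negative": "bar"}  # arbitrary symbols

def build_prompt(demos, setting, query):
    """Format demonstrations plus a query under one of three label settings:
    'gold'     -- correct labels (standard ICL: TR + TL together),
    'random'   -- labels drawn uniformly from the label space (isolates TR),
    'abstract' -- gold labels remapped to arbitrary symbols (probes TL).
    """
    lines = []
    for text, gold in demos:
        if setting == "gold":
            label = gold
        elif setting == "random":
            label = random.choice(label_space)
        elif setting == "abstract":
            label = abstract_map[gold]
        else:
            raise ValueError(f"unknown setting: {setting}")
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The query is left unlabeled; the LLM completes the final line.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(build_prompt(demos, "random", "an unforgettable performance"))
```

Comparing model accuracy across the three prompt variants is what disentangles the two forces: strong performance with random labels indicates TR, while performance with abstract labels can only come from TL, since the symbol-to-input mapping cannot be recalled from pre-training.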