Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?

By Sewon Min et al.

Table of Contents

1. Introduction
2. Related Work
3. Experimental Setup
4. Ground Truth Matters Little
4.1 Gold labels vs. random labels
4.2 Ablations
5. Why does In-Context Learning work?

Summary

Large language models can learn in-context by conditioning on input-label pairs (demonstrations) without needing ground-truth labels. The study showed that replacing gold labels with labels drawn at random from the label set only marginally degraded model performance. Varying the fraction of correct labels in the demonstrations and the number of examples likewise had little effect. The study also found similar results when the demonstrations were formatted with manual templates.
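The gold-vs-random comparison described above can be sketched in code. The snippet below is a minimal, hypothetical illustration (the task, label set, and template are assumptions, not taken from the paper): it formats k demonstrations into a prompt, optionally replacing each gold label with one sampled uniformly at random from the label set, which is the intervention whose small effect the study measures.

```python
import random

# Assumed binary sentiment task; the paper evaluates many tasks.
LABEL_SET = ["positive", "negative"]

def build_prompt(demonstrations, test_input, randomize_labels=True, seed=0):
    """Format (input, label) demonstrations followed by the test input.

    With randomize_labels=True, each gold label is replaced by one drawn
    uniformly at random from LABEL_SET -- the paper's key ablation.
    """
    rng = random.Random(seed)
    lines = []
    for text, gold_label in demonstrations:
        label = rng.choice(LABEL_SET) if randomize_labels else gold_label
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The model would be asked to complete the final line.
    lines.append(f"Review: {test_input}\nSentiment:")
    return "\n\n".join(lines)

demos = [
    ("A delightful, moving film.", "positive"),
    ("Dull and far too long.", "negative"),
]
print(build_prompt(demos, "An instant classic."))
```

Comparing a model's accuracy when prompted with `randomize_labels=True` versus `False` reproduces, in miniature, the experiment behind the paper's headline finding.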