Few-Shot Fine-Tuning vs. In-Context Learning: A Fair Comparison and Evaluation

By Marius Mosbach et al.

Table of Contents

1. Abstract
2. Introduction
3. Background
3.1 Fine-tuning
3.2 In-context learning
4. Results
4.1 In-domain Performance
4.2 Out-of-domain Performance
5. Model Selection Strategies
6. Conclusion
7. References

Summary

The document compares two task adaptation strategies for pre-trained language models: few-shot fine-tuning and in-context learning. It evaluates their in-domain and out-of-domain (OOD) generalization using models of various sizes. The results show that both approaches have strengths and limitations, with fine-tuned models achieving strong OOD generalization as model size increases. The study emphasizes the importance of fair comparisons between adaptation methods, in particular comparing models of equal size.
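To make the contrast concrete, the sketch below illustrates the core difference between the two strategies: in-context learning supplies labeled demonstrations inside the prompt at inference time with no weight updates, whereas fine-tuning would instead update the model's parameters on those same examples. The task, examples, and prompt format here are hypothetical placeholders for illustration, not the paper's exact setup.

```python
# Hypothetical sentiment-classification demonstrations (not from the paper).
demonstrations = [
    ("The movie was wonderful.", "positive"),
    ("I hated every minute.", "negative"),
]

def build_icl_prompt(demos, query):
    """In-context learning: prepend labeled demos to the query.

    The model's weights stay frozen; the 'learning' happens purely
    through conditioning on the prompt. Fine-tuning, by contrast,
    would run gradient updates on (text, label) pairs like these.
    """
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in demos]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_icl_prompt(demonstrations, "A delightful surprise.")
print(prompt)
```

The resulting prompt ends with an unfilled `Sentiment:` slot, which the frozen model completes; scaling experiments like those in the paper then compare this approach against a model fine-tuned on the same few examples.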