Symbol Tuning Improves In-Context Learning in Language Models

By Jerry Wei et al.
Published on Jan. 2, 2024

Table of Contents

1. Introduction
2. Symbol Tuning
3. Experimental Setup
4. Symbol-Tuned Models Are Better In-Context Learners

Summary

The paper presents symbol tuning, a finetuning procedure that improves a language model's ability to reason with and learn from input-label mappings presented in-context. Symbol tuning finetunes models on in-context exemplars in which natural language labels are replaced with arbitrary symbols, so the model cannot rely on label semantics or instructions and must instead infer the task from the input-label mappings shown in the prompt. The study demonstrates the effectiveness of symbol tuning across a range of in-context learning tasks, with the largest gains on tasks presented without instructions or without semantically relevant labels. Symbol-tuned models also perform better on algorithmic reasoning, with notable improvements on the List Functions and Simple Turing Concepts benchmarks. The paper highlights the simplicity and efficiency of symbol tuning and its potential to strengthen language models' ability to reason over arbitrary symbols in-context.
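To make the procedure concrete, below is a minimal Python sketch of how symbol-tuning exemplars could be constructed: the natural language labels of a task are swapped for arbitrary symbols before the few-shot prompt is assembled, so the model has to learn the input-label mapping from the exemplars alone. The symbol pool, function names, and prompt format here are illustrative assumptions, not the paper's exact implementation.

```python
import random

# Hypothetical pool of arbitrary symbols used in place of natural-language labels
# (the paper draws symbols from sets such as random integers and letter combinations).
ARBITRARY_SYMBOLS = ["foo", "bar", "XJ-2", "17", "zeta"]

def symbol_tune_exemplars(exemplars, labels, rng=None):
    """Remap each natural-language label to a randomly chosen arbitrary symbol.

    exemplars: list of (input_text, label) pairs forming the in-context prompt.
    labels: the set of original natural-language labels for the task.
    Returns the rewritten exemplars and the label->symbol mapping used.
    """
    rng = rng or random.Random(0)
    symbols = rng.sample(ARBITRARY_SYMBOLS, k=len(labels))
    mapping = dict(zip(sorted(labels), symbols))
    remapped = [(text, mapping[label]) for text, label in exemplars]
    return remapped, mapping

def build_prompt(remapped_exemplars, query):
    """Format exemplars and a query as a plain few-shot prompt with no task instructions."""
    lines = [f"Input: {text}\nOutput: {symbol}" for text, symbol in remapped_exemplars]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Example: a sentiment task whose labels are hidden behind arbitrary symbols,
# forcing the model to infer the mapping from the exemplars.
exemplars = [
    ("The movie was fantastic.", "positive"),
    ("I regret watching this.", "negative"),
]
remapped, mapping = symbol_tune_exemplars(exemplars, {"positive", "negative"})
print(build_prompt(remapped, "An absolute delight from start to finish."))
```

In this sketch the prompt contains neither instructions nor meaningful label names, which mirrors the setting symbol tuning targets: the only signal available is the mapping between inputs and their (arbitrary) symbols.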