Instruction Tuning for Large Language Models: A Survey

By Shengyu Zhang et al.
Published on March 12, 2024

Table of Contents

1 Introduction
2 Methodology
2.1 Instruction Dataset Construction
2.2 Instruction Tuning
3 Datasets
3.1 Human-crafted Data
3.1.1 Natural Instructions
3.1.2 P3
3.1.3 xP3
3.1.4 Flan 2021
3.1.5 LIMA
3.1.6 Super-Natural Instructions
3.1.7 Dolly

Summary

This paper surveys research on instruction tuning (IT) for large language models (LLMs). IT refers to further training an LLM on a dataset of (INSTRUCTION, OUTPUT) pairs so that the model learns to follow human instructions. The survey reviews the general methodology of IT, the construction of IT datasets, the training of IT models, and applications to different modalities and domains. It also discusses the benefits and open challenges of IT, emphasizing the need for further research, and fills a gap in the literature by organizing the current state of knowledge in this field.
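
To make the core idea concrete, below is a minimal sketch of instruction tuning using the Hugging Face transformers library. The base model ("gpt2"), the prompt template, and the toy (INSTRUCTION, OUTPUT) pairs are illustrative assumptions for the sketch, not details taken from the survey.

```python
# Minimal sketch of instruction tuning: fine-tune a pretrained causal LM on
# (INSTRUCTION, OUTPUT) pairs with the standard next-token objective.
# The base model ("gpt2"), prompt template, and toy pairs below are
# illustrative assumptions, not details from the survey.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # assumption: any small causal LM works for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy (INSTRUCTION, OUTPUT) pairs; real IT datasets such as Natural
# Instructions, P3, or Dolly contain thousands to millions of these.
pairs = [
    {"instruction": "Translate to French: Hello, world.",
     "output": "Bonjour, le monde."},
    {"instruction": "Summarize: The cat sat on the mat all day long.",
     "output": "A cat rested on a mat."},
]

def format_example(pair):
    # Hypothetical prompt template; each IT dataset defines its own format.
    return (f"### Instruction:\n{pair['instruction']}\n\n"
            f"### Response:\n{pair['output']}{tokenizer.eos_token}")

class InstructionDataset(torch.utils.data.Dataset):
    def __init__(self, pairs):
        self.encodings = [tokenizer(format_example(p), truncation=True,
                                    max_length=256, padding="max_length",
                                    return_tensors="pt") for p in pairs]

    def __len__(self):
        return len(self.encodings)

    def __getitem__(self, idx):
        ids = self.encodings[idx]["input_ids"].squeeze(0)
        mask = self.encodings[idx]["attention_mask"].squeeze(0)
        labels = ids.clone()
        labels[mask == 0] = -100  # ignore the loss on padding positions
        # The model shifts labels internally, so labels == input_ids gives
        # the usual next-token-prediction loss over the whole sequence.
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="it-sketch", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=InstructionDataset(pairs),
)
trainer.train()
```

In practice, implementations often also mask the loss on the instruction tokens so the model is optimized only to generate the response; the sketch above keeps the simpler whole-sequence objective for brevity.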