Llama 2: Open Foundation and Fine-Tuned Chat Models

By Hugo Touvron et al.
Published on July 19, 2023

Table of Contents

1 Introduction
2 Pretraining
2.1 Pretraining Data
2.2 Training Details
2.3 Llama 2 Pretrained Model Evaluation
3 Fine-tuning
3.1 Supervised Fine-Tuning (SFT)
3.2 Reinforcement Learning with Human Feedback (RLHF)
3.3 System Message for Multi-Turn Consistency
3.4 RLHF Results
4 Safety
4.1 Safety in Pretraining
4.2 Safety Fine-Tuning
4.3 Red Teaming
4.4 Safety Evaluation of Llama 2-Chat
5 Discussion
5.1 Learnings and Observations
5.2 Limitations and Ethical Considerations
5.3 Responsible Release Strategy
6 Related Work
7 Conclusion
A Appendix
A.1 Contributions
A.2 Additional Details for Pretraining
A.3 Additional Details for Fine-tuning
A.4 Additional Details for Safety
A.5 Data Annotation
A.6 Dataset Contamination
A.7 Model Card
Figure 1: Helpfulness human evaluation results for Llama 2-Chat compared to other open-source and closed-source models.

Summary

In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
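As a concrete illustration of working with the released models, the sketch below loads the 7B chat variant through the Hugging Face transformers library. It is a minimal example under stated assumptions: the model ID "meta-llama/Llama-2-7b-chat-hf" is the publicly hosted checkpoint (gated behind Meta's license acceptance), the [INST] ... [/INST] wrapper reflects the chat model's dialogue prompt format, and the generation settings are illustrative rather than the paper's evaluation configuration.

# Minimal sketch: generating a reply with Llama 2-Chat via transformers.
# Assumes `transformers` and `accelerate` are installed and that access
# to the gated "meta-llama/Llama-2-7b-chat-hf" repo has been granted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 2-Chat expects user turns wrapped in [INST] ... [/INST] tags.
prompt = "[INST] What is the capital of France? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))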