PaLM: Scaling Language Modeling with Pathways

By Aakanksha Chowdhery et al.
Published on Oct. 5, 2022

Table of Contents

1 Introduction
2 Model Architecture
2.1 Model Scale Hyperparameters
2.2 Model Card
3 Training Dataset
4 Training Infrastructure
4.1 Training Efficiency
5 Training Setup
5.1 Training Instability
6 Evaluation
6.1 English NLP tasks
6.2 BIG-bench
6.3 Reasoning
6.4 Code Tasks
6.5 Translation
6.6 Multilingual Natural Language Generation
6.7 Multilingual Question Answering
6.8 Analysis
7 Memorization
8 Dataset Contamination
9 Exploring Explanations
10 Representational Bias Analysis
10.1 Distributional bias in social groups
10.2 Toxicity in open-ended generation
10.3 Limitations
11 Ethical Considerations
12 Related Work
13 Open Questions in Scaling
14 Conclusion
15 Acknowledgments
A Contributions
B Compute Usage and Environmental Impact
C Dataset Analysis
D Datasheet
E Model Card
F Training for longer
G Sample Model Outputs
H Additional Results
H.1 English NLP tasks on smaller models
H.2 Additional BIG-bench results
H.3 Additional Multilingual NLG results

Summary

Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM). We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate continued benefits of scaling by achieving state-of-the-art few-shot learning results on hundreds of language understanding and generation benchmarks. On a number of these tasks, PaLM 540B achieves breakthrough performance, outperforming the finetuned state-of-the-art on a suite of multi-step reasoning tasks, and outperforming average human performance on the recently released BIG-bench benchmark. A significant number of BIG-bench tasks showed discontinuous improvements from model scale, meaning that performance steeply increased as we scaled to our largest model. PaLM also has strong capabilities in multilingual tasks and source code generation, which we demonstrate on a wide array of benchmarks. We additionally provide a comprehensive analysis on bias and toxicity, and study the extent of training data memorization with respect to model scale. Finally, we discuss the ethical considerations related to large language models and discuss potential mitigation strategies.
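To make the few-shot setup concrete, the sketch below shows how a k-shot prompt is assembled from a handful of labeled examples plus an unanswered query; the model is then asked to generate the completion with no gradient updates. The task, example strings, and the `build_few_shot_prompt` helper are illustrative assumptions, not code from the paper.

```python
# Minimal sketch of few-shot prompting: a handful of worked examples are
# placed in the model's context, followed by a new query for it to complete.
# The task, examples, and helper name are illustrative only.

def build_few_shot_prompt(examples, query, instruction=""):
    """Concatenate k labeled examples and an unanswered query into one prompt."""
    parts = [instruction] if instruction else []
    for question, answer in examples:
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {query}\nA:")  # the model continues from here
    return "\n\n".join(parts)


if __name__ == "__main__":
    shots = [
        ("Translate 'bonjour' to English.", "hello"),
        ("Translate 'merci' to English.", "thank you"),
    ]
    prompt = build_few_shot_prompt(shots, "Translate 'au revoir' to English.")
    print(prompt)
    # A language model would generate the text after the final "A:",
    # adapting to the task from context alone, without fine-tuning.
```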