On Empirical Comparisons of Optimizers for Deep Learning
By Dami Choi et al.
Table of Contents
1. Abstract
2. Introduction
3. Background and Related Work
4. Experiments
Summary
The paper examines how optimizer selection in deep learning pipelines interacts with hyperparameter tuning, showing that empirical comparisons between optimizers are highly sensitive to the tuning protocol. It highlights the inclusion relationships between optimizers: a more general optimizer can recover a simpler one as a special case of its hyperparameters (for example, SGD with momentum reduces to plain SGD when the momentum coefficient is zero), so a sufficiently well-tuned general optimizer should never underperform its special cases. To inform practitioners, the experiments allow all optimization hyperparameters to vary for each optimizer, challenging the common practice of imposing the same hyperparameter search space on every optimizer and emphasizing that hyperparameters must be tuned carefully to maximize performance. The findings suggest that inclusion relationships between optimizers matter in practice and shed light on the practical relevance of empirical optimizer comparisons in deep learning.
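The inclusion relationships discussed above can be illustrated with a minimal sketch (not the paper's code): a heavy-ball momentum update with the momentum coefficient set to zero produces exactly the same step as plain SGD, which is the simplest instance of one optimizer containing another as a special case. The function names and hyperparameter values here are illustrative assumptions.

```python
import numpy as np

def sgd_step(w, g, lr):
    # Plain SGD update: w <- w - lr * g
    return w - lr * g

def momentum_step(w, v, g, lr, mu):
    # Heavy-ball momentum: v <- mu * v + g; w <- w - lr * v
    v = mu * v + g
    return w - lr * v, v

# Illustrative parameters and gradient (arbitrary values)
w = np.array([1.0, -2.0])
g = np.array([0.5, 0.3])

w_sgd = sgd_step(w, g, lr=0.1)
# With mu = 0, the momentum optimizer collapses to plain SGD
w_mom, _ = momentum_step(w, np.zeros_like(w), g, lr=0.1, mu=0.0)

print(np.allclose(w_sgd, w_mom))  # True: SGD is a special case of momentum
```

Because of such relationships, a search that tunes all of the general optimizer's hyperparameters can always match (or beat) the special case, which is why the paper argues that tuning protocols drive the outcome of optimizer comparisons.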