Supervised Contrastive Learning

By Prannay Khosla et al.
Published on March 10, 2021

Table of Contents

1 Introduction
2 Related Work
3 Method
3.1 Representation Learning Framework
3.2 Contrastive Loss Functions
3.2.1 Self-Supervised Contrastive Loss
3.2.2 Supervised Contrastive Losses

Summary

Abstract: Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models. In this work, the authors extend the self-supervised batch contrastive approach to the fully supervised setting, allowing them to effectively leverage label information. They analyze two possible versions of the supervised contrastive (SupCon) loss and identify the best-performing formulation. The results also show benefits in robustness to natural image corruptions and stability under different hyperparameter settings. The proposed loss achieves state-of-the-art top-1 accuracy on the ImageNet dataset and outperforms cross-entropy on various datasets. The paper discusses the methodology and results in detail, demonstrating how supervised contrastive learning improves classification accuracy.
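As a rough illustration of the better-performing of the two loss variants discussed in the paper (the one that applies the normalization over positives outside the log), here is a minimal NumPy sketch of a supervised contrastive loss over a batch of L2-normalized embeddings. The function name, the toy data, and the default temperature of 0.07 are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.07):
    """Minimal sketch of a supervised contrastive (SupCon-style) loss.

    features: (N, D) array of L2-normalized embeddings, one per augmented view.
    labels:   (N,) integer class labels; every anchor is assumed to have at
              least one positive (another sample sharing its label).
    """
    n = features.shape[0]
    # Pairwise cosine similarities scaled by the temperature.
    logits = features @ features.T / temperature
    # Subtract the row-wise max for numerical stability (cancels in the softmax).
    logits = logits - logits.max(axis=1, keepdims=True)

    # Exclude each anchor from its own comparisons; positives are the other
    # samples in the batch that share the anchor's label.
    not_self = ~np.eye(n, dtype=bool)
    positives = (labels[:, None] == labels[None, :]) & not_self

    # Log-softmax over all non-anchor samples in the batch.
    exp_logits = np.exp(logits) * not_self
    log_prob = logits - np.log(exp_logits.sum(axis=1, keepdims=True))

    # Average the log-probability over each anchor's positives, then over
    # anchors, with the 1/|P(i)| normalization applied outside the log.
    mean_log_prob_pos = (positives * log_prob).sum(axis=1) / positives.sum(axis=1)
    return -mean_log_prob_pos.mean()

# Toy usage: four 8-dimensional embeddings from two classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
print(supcon_loss(feats, np.array([0, 0, 1, 1])))
```

Because the labels define the positives, all samples of the same class are pulled together in embedding space rather than only augmented views of the same image, which is the key difference from the self-supervised contrastive loss described in Section 3.2.1.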