Deep Active Learning with Contrastive Learning Under Realistic Data Pool Assumptions

By J. Kim et al.

Table of Contents

Abstract
Introduction
Unlabeled Data Pool in the Wild
Active Learning via Contrastive Learning
Acquisition Strategy on the Feature Space
Related Work

Summary

This paper examines active learning with deep neural networks under realistic assumptions about the unlabeled data pool. The proposed method uses contrastive learning to train a representation model on both the labeled and unlabeled pools, and introduces an acquisition strategy on the learned feature space to select informative samples for labeling. To evaluate under realistic conditions, the study presents two new benchmarks, MixMNIST and MixCIFAR60, whose pools mix in-distribution, ambiguous, and out-of-distribution samples. Experimental results show that the method reduces annotation cost while maintaining performance, underscoring the need to account for diverse unlabeled data pools in active learning research.
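As a rough illustration of what an acquisition strategy on a learned feature space can look like, the sketch below embeds both pools with a contrastively trained encoder and queries the unlabeled points that the labeled set covers worst. This is a minimal sketch under assumed names (`encoder`, `budget`), not the paper's exact selection criterion.

```python
# Illustrative sketch only: one generic feature-space acquisition rule is to
# pick the unlabeled samples farthest (in cosine distance) from their nearest
# labeled neighbor in the encoder's embedding space.
import torch
import torch.nn.functional as F

def select_queries(encoder, labeled_x, unlabeled_x, budget):
    """Return indices of `budget` unlabeled samples that are least
    covered by the labeled set in the learned feature space."""
    with torch.no_grad():
        z_lab = F.normalize(encoder(labeled_x), dim=1)
        z_unl = F.normalize(encoder(unlabeled_x), dim=1)
    # Cosine similarity of each unlabeled point to every labeled point.
    sims = z_unl @ z_lab.T                      # shape: (n_unlabeled, n_labeled)
    # Distance to the closest labeled point; large values = poorly covered.
    dist_to_labeled = 1.0 - sims.max(dim=1).values
    return torch.topk(dist_to_labeled, k=budget).indices
```

In the paper, the acquisition criterion is defined on the representation learned by contrastive training; the nearest-labeled-neighbor distance above is just one simple instantiation of selecting informative samples on such a feature space.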