Robust Contrastive Active Learning with Feature-guided Query Strategies

By Ranganath Krishnan et al.

Table of Contents

1. Introduction
2. Background and Problem Setup
3. Proposed Method
4. Experiments and Results

Summary

The paper introduces Supervised Contrastive Active Learning (SCAL), which combines a supervised contrastive training objective with two novel, efficient query strategies: one scores unlabeled samples by their feature similarity to the labeled pool, the other by their feature-reconstruction error under a principal component analysis (PCA) of the learned features. Active learning matters when data annotation is expensive and time-consuming, but training deep neural networks in this setting must cope with dataset shift, class imbalance, and sampling bias. By exploiting the contrastive feature space, SCAL selects informative and diverse samples efficiently and reduces sampling bias.

The authors also study how the choice of query strategy affects robustness to distributional shift. In extensive image classification experiments on both balanced and imbalanced datasets, SCAL outperforms state-of-the-art active learning methods in accuracy, model calibration, and robustness while requiring fewer labeled samples.
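To make the two scoring ideas concrete, here is a minimal NumPy/scikit-learn sketch. It assumes features have already been extracted by the contrastively trained encoder; the function names, the use of maximum cosine similarity to the labeled pool, and the choice of 32 principal components are illustrative assumptions for this summary, not the paper's exact formulation.

```python
import numpy as np
from sklearn.decomposition import PCA


def feature_similarity_scores(unlabeled_feats, labeled_feats):
    """Score each unlabeled sample by 1 minus its maximum cosine similarity to
    the labeled pool; samples least similar to what is already labeled score highest."""
    u = unlabeled_feats / np.linalg.norm(unlabeled_feats, axis=1, keepdims=True)
    lab = labeled_feats / np.linalg.norm(labeled_feats, axis=1, keepdims=True)
    max_sim = (u @ lab.T).max(axis=1)   # similarity to the closest labeled sample
    return 1.0 - max_sim                # higher score -> more novel -> query first


def pca_reconstruction_error_scores(unlabeled_feats, labeled_feats, n_components=32):
    """Fit PCA on labeled features and score unlabeled samples by how poorly the
    learned principal subspace reconstructs them."""
    pca = PCA(n_components=n_components).fit(labeled_feats)
    reconstructed = pca.inverse_transform(pca.transform(unlabeled_feats))
    return np.linalg.norm(unlabeled_feats - reconstructed, axis=1)  # higher error -> query first


# Usage: request labels for the top-k highest-scoring unlabeled samples.
rng = np.random.default_rng(0)
labeled = rng.normal(size=(200, 128))      # stand-in for encoder features of the labeled pool
unlabeled = rng.normal(size=(1000, 128))   # stand-in for encoder features of the unlabeled pool
scores = pca_reconstruction_error_scores(unlabeled, labeled)
query_indices = np.argsort(scores)[-10:]   # k = 10 samples to annotate next
```

Both scores favor samples that the current labeled set explains poorly, which is how the approach promotes diversity and reduces sampling bias in the acquired batches.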
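For reference, the supervised contrastive loss that SCAL builds on pulls together embeddings of samples sharing a label and pushes apart the rest. Below is a simplified PyTorch sketch written for this summary; it omits the multi-view augmentation batching typically used in supervised contrastive training and is not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(features, labels, temperature=0.1):
    """features: (N, D) embeddings; labels: (N,) integer class labels."""
    features = F.normalize(features, dim=1)                     # cosine similarity via dot products
    n = features.size(0)
    sim = features @ features.T / temperature
    not_self = ~torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    sim = sim.masked_fill(~not_self, float("-inf"))             # drop self-similarity from the denominator
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)  # log-softmax over all other samples
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                                      # anchors with at least one positive
    mean_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()                            # minimizing pulls same-class embeddings together


# Usage with random placeholder embeddings and labels.
emb = torch.randn(32, 128, requires_grad=True)
lbl = torch.randint(0, 10, (32,))
loss = supervised_contrastive_loss(emb, lbl)
loss.backward()
```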