When Deep Learners Change Their Mind: Learning Dynamics for Active Learning

By Javad Zolfaghari Bengar et al.
Published on July 30, 2021

Table of Contents

Abstract
1 Introduction
2 Related work
3 Active learning for image classification
3.1 Label-dispersion acquisition function
3.2 Informativeness Analysis
4 Experimental Results

Summary

Active learning aims to select the samples for annotation that yield the largest performance improvement for the learning algorithm. This paper proposes a new informativeness-based active learning method built on the learning dynamics of a neural network. The method tracks the label assignments of the unlabeled data pool over the course of training and captures the learning dynamics with a metric called label-dispersion. It shows promising results on benchmark datasets and outperforms state-of-the-art active learning methods. The paper also reviews related work on active learning strategies and presents experimental results on the CIFAR10 and CIFAR100 datasets. The proposed acquisition function, label-dispersion, is compared with other informativeness- and representativeness-based approaches such as BALD, Margin Sampling, CoreSet, VAAL, and an Oracle method. The results indicate that label-dispersion selects misclassified samples with high uncertainty, which can improve model performance once those samples are labeled.
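The summary describes label-dispersion as a measure of how much a sample's predicted label changes across training. A minimal sketch of one plausible reading, assuming dispersion is defined as one minus the fraction of checkpoints on which the most frequent predicted label occurred (the function name and the checkpoint-based setup here are illustrative, not taken from the paper):

```python
from collections import Counter

def label_dispersion(predictions):
    """Dispersion of one unlabeled sample's predicted labels.

    predictions: class labels predicted for the same sample at
    successive training checkpoints.
    Returns a value in [0, 1): 0.0 when the model always predicts
    the same label; values approaching 1 when predictions keep changing.
    """
    counts = Counter(predictions)
    mode_count = counts.most_common(1)[0][1]  # occurrences of the most frequent label
    return 1.0 - mode_count / len(predictions)

# A stable sample has low dispersion; a sample the model keeps
# changing its mind about has high dispersion:
stable = label_dispersion([3, 3, 3, 3, 3])    # → 0.0
unstable = label_dispersion([3, 7, 3, 1, 7])  # → 0.6
```

Under this reading, the acquisition step would simply rank the unlabeled pool by dispersion and send the highest-scoring samples to the annotator.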