ActiveGLAE: A Benchmark for Deep Active Learning with Transformers

By L. Rauch et al.
Published on June 16, 2023

Table of Contents

1 Introduction
2 Problem Setting
3 Current Practice of Evaluating Deep Active Learning
3.1 Data Set Selection
3.2 Model Training
3.3 Deep Active Learning Setting
Conclusion

Summary

The document examines the challenges of evaluating deep active learning (DAL) with transformer-based language models and introduces ActiveGLAE, a benchmark designed to provide a standardized evaluation protocol for DAL. The benchmark comprises a diverse set of NLP classification tasks together with guidelines for realistic evaluation. The paper identifies three key challenges in current practice: data set selection, model training without a validation set, and the configuration of the DAL setting itself, in particular the choice of query strategy and annotation budget. It surveys current evaluation practices and proposes a systematic approach to addressing these challenges.
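To make the DAL setting concrete, the following is a minimal sketch of a pool-based active learning loop with an uncertainty (entropy) query strategy and a fixed annotation budget. This is an illustrative example, not the paper's implementation: the toy pool, the `entropy` and `query` helpers, and the budget values are all assumptions for demonstration.

```python
import numpy as np

def entropy(probs):
    # Predictive entropy per sample; higher means more uncertain.
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def query(probs, labeled_mask, query_size):
    # Select the most uncertain samples from the unlabeled pool.
    scores = entropy(probs)
    scores[labeled_mask] = -np.inf  # never re-query labeled data
    return np.argsort(scores)[-query_size:]

# Toy pool: softmax outputs for 10 samples over 3 classes
# (in practice these would come from the current model checkpoint).
rng = np.random.default_rng(0)
logits = rng.normal(size=(10, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

labeled = np.zeros(10, dtype=bool)
budget, query_size = 6, 2  # total annotation budget, samples per round
while labeled.sum() < budget:
    idx = query(probs, labeled, query_size)
    labeled[idx] = True  # "annotate" the queried samples
    # A real DAL loop would retrain the model here and recompute probs.
```

The loop stops once the budget is exhausted; swapping `entropy` for another acquisition function (e.g., margin or random sampling) changes only the `query` step.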