Abstract
Deep active learning (DAL) seeks to reduce annotation costs by enabling the model to actively query annotations for the instances from which it expects to learn the most. Despite extensive research, there is currently no standardized evaluation protocol for DAL with transformer-based language models. Diverse experimental settings make it difficult to compare research and derive recommendations for practitioners. To tackle this challenge, we propose the ActiveGLAE benchmark, a comprehensive collection of data sets and evaluation guidelines for assessing DAL. Our benchmark aims to facilitate and streamline the evaluation of novel DAL strategies. Additionally, we provide an extensive overview of current practice in DAL with transformer-based language models. We identify three key challenges - data set selection, model training, and DAL settings - that pose difficulties in comparing query strategies. We establish baseline results through an extensive set of experiments as a reference point for evaluating future work. Based on our findings, we provide guidelines for researchers and practitioners.
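To make the query loop described above concrete, the following is a minimal sketch of pool-based DAL with a least-confidence (uncertainty) query strategy. It is an illustration under stated assumptions, not the paper's or benchmark's implementation: a scikit-learn logistic regression on synthetic data stands in for a fine-tuned transformer-based language model, and all settings (seed-set size, `n_rounds`, `batch_size`) are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy pool of unlabeled instances; in the paper's setting these would be
# text inputs and the model a transformer-based language model.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=20, replace=False))  # initial seed set
pool = [i for i in range(len(X)) if i not in set(labeled)]

n_rounds, batch_size = 5, 20  # illustrative DAL settings
for _ in range(n_rounds):
    # Retrain on the current labeled set (stand-in for fine-tuning an LM).
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    # Least-confidence query strategy: instances whose most likely class
    # has the lowest predicted probability are the most uncertain.
    uncertainty = 1.0 - probs.max(axis=1)
    queried = np.argsort(uncertainty)[-batch_size:]
    # "Annotate" the queried instances and move them into the labeled set.
    for idx in sorted(queried, reverse=True):
        labeled.append(pool.pop(idx))

print(f"Labeled set size after active learning: {len(labeled)}")
```

Swapping the `uncertainty` computation for another acquisition score (e.g., entropy or a margin criterion) changes the query strategy while the rest of the loop stays the same, which is precisely the kind of comparison the benchmark is designed to standardize.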
Document type: | Conference contribution (paper) |
---|---|
Faculty: | Mathematics, Informatics and Statistics > Statistics |
Subject areas: | 300 Social sciences > 310 Statistics; 500 Natural sciences and mathematics > 510 Mathematics |
ISBN: | 978-3-031-43411-2; 978-3-031-43412-9 |
Place: | Cham |
Note: | Part of: Lecture Notes in Artificial Intelligence (LNAI); (LNCS, volume 14169) |
Language: | English |
Document ID: | 121958 |
Date published on Open Access LMU: | 04 Nov. 2024, 14:02 |
Last modified: | 04 Nov. 2024, 14:02 |