Abstract
Transformer-based models for transfer learning can achieve high prediction accuracies on text-based supervised learning tasks with relatively few training instances. These models are thus likely to benefit social scientists who seek text-based measures that are as accurate as possible but have only limited resources for annotating training data. To enable social scientists to leverage these potential benefits for their research, this article explains how these methods work, why they might be advantageous, and what their limitations are. Additionally, three Transformer-based models for transfer learning, BERT, RoBERTa, and the Longformer, are compared to conventional machine learning algorithms on three applications. Across all evaluated tasks, textual styles, and training set sizes, the conventional models are consistently outperformed by transfer learning with Transformers, demonstrating the benefits these models can bring to text-based social science research.
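To illustrate the workflow the abstract describes, below is a minimal sketch of fine-tuning a pretrained Transformer for text classification with the Hugging Face `transformers` and `datasets` libraries. The checkpoint (`bert-base-uncased`), the dataset (`imdb`), the subset size, and all hyperparameters are illustrative assumptions, not the article's actual setup; the small 500-example subset merely stands in for the limited annotation budget the abstract mentions.

```python
# Hypothetical sketch: fine-tune a pretrained Transformer on a small
# labeled text dataset. Checkpoint, dataset, and hyperparameters are
# illustrative assumptions, not taken from the article.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

# Small subsets simulate the limited annotated data a social science
# project might have available.
ds = load_dataset("imdb")
train_ds = ds["train"].shuffle(seed=42).select(range(500))
eval_ds = ds["test"].shuffle(seed=42).select(range(500))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Convert raw text into fixed-length token ID sequences.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

train_ds = train_ds.map(tokenize, batched=True)
eval_ds = eval_ds.map(tokenize, batched=True)

# Load pretrained weights and attach a fresh binary classification head;
# fine-tuning updates all parameters on the small labeled set.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

args = TrainingArguments(output_dir="bert-finetune",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)

Trainer(model=model, args=args, train_dataset=train_ds,
        eval_dataset=eval_ds).train()
```

A conventional baseline of the kind the article compares against (e.g., a bag-of-words model with a linear classifier) would replace the pretrained encoder with task-specific features learned from scratch, which is where transfer learning's advantage on small training sets comes from.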
| Document type: | Journal article |
|---|---|
| Faculty: | Social Sciences > Geschwister-Scholl-Institut für Politikwissenschaft |
| Subject areas: | 300 Social Sciences > 320 Politics |
| ISSN: | 0049-1241 |
| Language: | English |
| Document ID: | 110950 |
| Date of publication on Open Access LMU: | 02 Apr 2024, 07:22 |
| Last modified: | 02 Apr 2024, 07:22 |