
Abstract
It has been shown for English that discrete and soft prompting perform strongly in few-shot learning with pretrained language models (PLMs). In this paper, we show that discrete and soft prompting perform better than finetuning in multilingual cases: crosslingual transfer and in-language training of multilingual natural language inference. For example, with 48 English training examples, finetuning obtains 33.74% accuracy in crosslingual transfer, barely surpassing the majority baseline (33.33%). In contrast, discrete and soft prompting outperform finetuning, achieving 36.43% and 38.79%, respectively. We also demonstrate good performance of prompting with training data in multiple languages other than English.
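To make the cloze-style reading of discrete prompting concrete, here is a minimal sketch assuming a Hugging Face XLM-R masked LM. The pattern `<premise>? <mask>, <hypothesis>` and the Yes/No/Maybe verbalizer are illustrative assumptions, not necessarily the paper's exact choices: each NLI label is scored by the logit of its verbalizer word at the mask position, with no parameter updates.

```python
# Minimal sketch of discrete (cloze-style) prompting for NLI with a
# multilingual masked LM. Pattern and verbalizer are assumptions for
# illustration, not the paper's verified configuration.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "xlm-roberta-base"  # assumption: any multilingual masked LM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Verbalizer: one answer word per NLI label (illustrative choice).
VERBALIZER = {"entailment": "Yes", "contradiction": "No", "neutral": "Maybe"}

def verbalizer_id(word: str) -> int:
    # Score a label via the first subword piece of its verbalizer word
    # (simplifying assumption; multi-piece words are truncated).
    return tokenizer(word, add_special_tokens=False)["input_ids"][0]

def classify(premise: str, hypothesis: str) -> str:
    # Cloze pattern "<premise>? <mask>, <hypothesis>" (illustrative pattern).
    text = f"{premise}? {tokenizer.mask_token}, {hypothesis}"
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)
    # Locate the single mask token in the input sequence.
    mask_pos = (
        inputs["input_ids"][0] == tokenizer.mask_token_id
    ).nonzero(as_tuple=True)[0].item()
    # The label whose verbalizer word gets the highest mask-position
    # logit wins; no gradient steps are taken.
    scores = {
        label: logits[0, mask_pos, verbalizer_id(word)].item()
        for label, word in VERBALIZER.items()
    }
    return max(scores, key=scores.get)

print(classify("A man is playing a guitar.", "Someone is making music."))
```

Soft prompting differs in that the hand-written pattern is replaced by trainable continuous prompt vectors prepended to the input embeddings and tuned on the few-shot examples, typically while the PLM's own weights stay frozen.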
Item Type | Conference or Workshop Item (Paper)
---|---
EU Funded Grant Agreement Number | 740516
EU Projects | Horizon 2020 > ERC Grants > ERC Advanced Grant > ERC Grant 740516: NonSequeToR - Non-sequence models for tokenization replacement
Research Centers | Center for Information and Language Processing (CIS)
Subjects | 000 Computer science, information and general works > 000 Computer science, knowledge, and systems; 400 Language > 400 Language; 400 Language > 410 Linguistics
URN | urn:nbn:de:bvb:19-epub-92185-0
Place of Publication | Stroudsburg, PA
Language | English
Item ID | 92185
Date Deposited | 27 May 2022, 08:17
Last Modified | 27 May 2022, 08:17