
Zhao, Mengjie and Schütze, Hinrich (November 2021): Discrete and Soft Prompting for Multilingual Models. 2021 Conference on Empirical Methods in Natural Language Processing, Online and Punta Cana, Dominican Republic, November 2021. In: Moens, Marie-Francine; Huang, Xuanjing; Specia, Lucia and Yih, Scott Wen-tau (Eds.): Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Stroudsburg, PA: Association for Computational Linguistics.


Abstract

It has been shown for English that discrete and soft prompting perform strongly in few-shot learning with pretrained language models (PLMs). In this paper, we show that discrete and soft prompting perform better than fine-tuning in multilingual settings: cross-lingual transfer and in-language training of multilingual natural language inference. For example, with 48 English training examples, fine-tuning obtains 33.74% accuracy in cross-lingual transfer, barely surpassing the majority baseline (33.33%). In contrast, discrete and soft prompting outperform fine-tuning, achieving 36.43% and 38.79% accuracy, respectively. We also demonstrate that prompting performs well with training data in multiple languages other than English.
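As a rough illustration of what soft prompting means here (a minimal sketch, not the authors' implementation), the snippet below prepends trainable "virtual token" embeddings to a frozen PLM's input embeddings; only these prompt vectors are trained on the few-shot examples, while the PLM's weights stay fixed. The prompt length, embedding dimension, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable prompt vectors prepended to the input embeddings.

    Only these vectors are updated; the PLM's own parameters stay frozen,
    which is what distinguishes soft prompting from full fine-tuning.
    """
    def __init__(self, prompt_length: int, embed_dim: int):
        super().__init__()
        # Randomly initialised "virtual tokens" living in embedding space.
        self.prompt = nn.Parameter(torch.randn(prompt_length, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim) token embeddings of the NLI input
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Hypothetical usage: wrap the token embeddings of a frozen multilingual PLM
# (e.g. XLM-R) and optimize only soft_prompt.parameters() on the few-shot data.
soft_prompt = SoftPrompt(prompt_length=20, embed_dim=768)
dummy_embeds = torch.randn(4, 32, 768)   # stand-in for real token embeddings
extended = soft_prompt(dummy_embeds)     # shape: (4, 52, 768)
```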
