Abstract
Prompt-based approaches excel at few-shot learning. However, Perez et al. (2021) recently cast doubt on their performance, reporting difficulty obtaining good results in a “true” few-shot setting in which prompts and hyperparameters cannot be tuned on a dev set. In view of this, we conduct an extensive study of Pet, a method that combines textual instructions with example-based finetuning. We show that, if correctly configured, Pet performs strongly in true few-shot settings without a dev set. Crucial to this strong performance are several design choices, including Pet’s ability to intelligently handle multiple prompts. We put our findings to a real-world test by running Pet on RAFT, a benchmark of tasks taken from realistic NLP applications for which no labeled dev or test sets are available. Pet achieves a new state of the art on RAFT and performs close to non-expert humans for 7 out of 11 tasks. These results demonstrate that prompt-based learners can successfully be applied in true few-shot settings and underpin our belief that learning from instructions will play an important role on the path towards human-like few-shot learning capabilities.
| Field | Value |
|---|---|
| Item Type: | Journal article |
| EU Funded Grant Agreement Number: | 740516 |
| EU Projects: | Horizon 2020 > ERC Grants > ERC Advanced Grant > ERC Grant 740516: NonSequeToR - Non-sequence models for tokenization replacement |
| Form of Publication: | Publisher's Version |
| Research Centers: | Center for Information and Language Processing (CIS) |
| Subjects: | 400 Language; 400 Language > 410 Linguistics |
| URN: | urn:nbn:de:bvb:19-epub-107438-2 |
| Language: | English |
| Item ID: | 107438 |
| Date Deposited: | 20 Oct 2023 07:32 |
| Last Modified: | 20 Oct 2023 07:32 |
