Zhao, Mengjie; Mi, Fei; Wang, Yasheng; Li, Minglei; Jiang, Xin; Liu, Qun and Schütze, Hinrich (July 2022): LMTurk: Few-Shot Learners as Crowdsourcing Workers in a Language-Model-as-a-Service Framework. NAACL 2022, Seattle, United States, July 2022. Carpuat, Marine; de Marneffe, Marie-Catherine and Meza Ruiz, Ivan Vladimir (Eds.): In: Findings of the Association for Computational Linguistics: NAACL 2022, Stroudsburg, PA: Association for Computational Linguistics (ACL). pp. 675-692 [PDF, 2MB]

Abstract

Vast efforts have been devoted to creating high-performance few-shot learners, i.e., large-scale pretrained language models (PLMs) that perform well with little downstream task training data. Training PLMs has incurred significant cost, but utilizing the few-shot learners is still challenging due to their enormous size. This work focuses on a crucial question: how can we make effective use of these few-shot learners? We propose LMTurk, a novel approach that treats few-shot learners as crowdsourcing workers. The rationale is that crowdsourcing workers are in fact few-shot learners: they are shown a few illustrative examples to learn about a task and then start annotating. LMTurk employs few-shot learners built upon PLMs as workers. We show that the resulting annotations can be used to train models that solve the task well and are small enough to be deployed in practical scenarios. Active learning is integrated into LMTurk to reduce the number of queries made to PLMs, minimizing the computational cost of running PLM inference passes. Altogether, LMTurk is an important step towards making effective use of current PLMs.
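The abstract describes a pipeline: a few-shot PLM acts as an annotation worker, active learning selects which unlabeled examples are worth sending to it, and the resulting labels train a small, deployable model. The sketch below illustrates that loop under stated assumptions; `plm_worker`, `small_model`, and all names are hypothetical placeholders, not the paper's actual implementation, and uncertainty sampling stands in for whichever active-learning criterion the paper uses.

```python
# A minimal sketch of the LMTurk-style loop: a few-shot PLM "worker"
# annotates data, active learning limits how often it is queried, and
# a small student model is trained on the annotations. Illustrative
# only; assumes small_model exposes sklearn-style fit/predict_proba.
import numpy as np

def lmturk_loop(plm_worker, small_model, unlabeled_pool, rounds=5, budget=100):
    """plm_worker(texts) -> list of labels (expensive PLM inference)."""
    labeled_texts, labels = [], []
    pool = list(unlabeled_pool)
    for _ in range(rounds):
        if not pool:
            break
        if labeled_texts:
            # Active learning: query the PLM only on the examples where
            # the current small model is least confident.
            probs = small_model.predict_proba(pool)      # shape (n, n_classes)
            uncertainty = 1.0 - probs.max(axis=1)
            chosen = {int(i) for i in np.argsort(-uncertainty)[:budget]}
        else:
            # First round: no model yet, take a seed batch.
            chosen = set(range(min(budget, len(pool))))
        batch = [t for i, t in enumerate(pool) if i in chosen]
        labels += plm_worker(batch)       # PLM annotations, capped by the budget
        labeled_texts += batch
        pool = [t for i, t in enumerate(pool) if i not in chosen]
        # Retrain the cheap, deployable student on all annotations so far.
        small_model.fit(labeled_texts, labels)
    return small_model
```

In this reading, the budgeted querying is what keeps PLM inference cost down, while the student model is what actually ships.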
