Elazar, Yanai; Kassner, Nora; Ravfogel, Shauli; Ravichander, Abhilasha; Hovy, Eduard; Schütze, Hinrich; and Goldberg, Yoav (2021): Measuring and Improving Consistency in Pretrained Language Models. In: Transactions of the Association for Computational Linguistics, Vol. 9, pp. 1012-1031.

Full text is not available on 'Open Access LMU'.

Abstract

Consistency of a model, that is, the invariance of its behavior under meaning-preserving alternations in its input, is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge? To this end, we create PARAREL, a high-quality resource of English cloze-style query paraphrases. It contains a total of 328 paraphrases for 38 relations. Using PARAREL, we show that the consistency of all PLMs we experiment with is poor, though with high variance between relations. Our analysis of the representational spaces of PLMs suggests that they have a poor structure and are currently not suitable for representing knowledge robustly. Finally, we propose a method for improving model consistency and experimentally demonstrate its effectiveness.
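To make the notion of consistency concrete, below is a minimal sketch (not the authors' released code) of the kind of probe the abstract describes: the same fact is posed to a masked language model through several cloze-style paraphrases, and the model counts as consistent on that fact only if all paraphrases elicit the same top prediction. The templates and the example subject are illustrative placeholders, not entries from PARAREL, and the sketch assumes the Hugging Face transformers library.

```python
# Minimal consistency-probe sketch, assuming the Hugging Face
# "transformers" library; the templates and example subject are
# illustrative placeholders, not actual PARAREL entries.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

subject = "Paris"
# Hypothetical paraphrases of one relation; PARAREL provides
# 328 such patterns covering 38 relations.
templates = [
    "[X] is the capital of [MASK].",
    "[X] serves as the capital of [MASK].",
]

predictions = []
for template in templates:
    query = template.replace("[X]", subject)
    top = fill_mask(query)[0]  # highest-scoring completion
    predictions.append(top["token_str"].strip())

# The model is consistent on this fact only if every paraphrase
# elicits the same answer; a PARAREL-style evaluation aggregates
# this check over many subject-object pairs per relation.
print(predictions, "consistent:", len(set(predictions)) == 1)
```

Note that this check is independent of factual accuracy: a model can answer every paraphrase with the same wrong object and still be consistent, which is why the paper treats consistency and correctness as separate properties.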
