Abstract
Pretraining deep language models has led to large performance gains in NLP. Despite this success, Schick and Schütze (2020) recently showed that these models struggle to understand rare words. For static word embeddings, this problem has been addressed by separately learning representations for rare words. In this work, we transfer this idea to pretrained language models: We introduce BERTRAM, a powerful architecture based on BERT that is capable of inferring high-quality embeddings for rare words that are suitable as input representations for deep language models. This is achieved by enabling the surface form and contexts of a word to interact with each other in a deep architecture. Integrating BERTRAM into BERT leads to large performance increases due to improved representations of rare and medium-frequency words on both a rare word probing task and three downstream tasks.
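To make the idea of letting a word's surface form and its contexts interact more concrete, below is a minimal, hypothetical PyTorch sketch, not the actual BERTRAM implementation. It assumes a character n-gram encoding for the surface form, attention from the form vector over pre-encoded context vectors, and a learned gate combining the two; all class names, dimensions, and the gating choice are illustrative assumptions, and the resulting vector would be injected into BERT as the rare word's input embedding.

```python
import torch
import torch.nn as nn

# Illustrative sketch only -- NOT the actual BERTRAM architecture.
# Shows one way a surface-form (character n-gram) embedding and
# context embeddings could interact before being used as a rare
# word's input representation for a deep language model like BERT.

class RareWordEmbedder(nn.Module):  # hypothetical name
    def __init__(self, hidden_size: int = 768, num_char_ngrams: int = 50000):
        super().__init__()
        # Surface form: bag of character n-grams, averaged (assumption).
        self.ngram_embeddings = nn.Embedding(num_char_ngrams, hidden_size)
        # Contexts: attend from the form vector over context vectors.
        self.attention = nn.MultiheadAttention(hidden_size, num_heads=8, batch_first=True)
        # Gate deciding how much to rely on form vs. context (assumption).
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, ngram_ids: torch.Tensor, context_vecs: torch.Tensor) -> torch.Tensor:
        # ngram_ids: (batch, n_ngrams); context_vecs: (batch, n_contexts, hidden)
        form = self.ngram_embeddings(ngram_ids).mean(dim=1, keepdim=True)   # (batch, 1, hidden)
        ctx, _ = self.attention(query=form, key=context_vecs, value=context_vecs)
        g = torch.sigmoid(self.gate(torch.cat([form, ctx], dim=-1)))
        # Gated combination; this vector would replace the word's input embedding.
        return (g * form + (1 - g) * ctx).squeeze(1)                        # (batch, hidden)


# Usage sketch: embed one rare word from 5 hypothetical encoded contexts.
embedder = RareWordEmbedder()
ngram_ids = torch.randint(0, 50000, (1, 12))   # char n-gram ids of the rare word
context_vecs = torch.randn(1, 5, 768)          # e.g. encoded context sentences
rare_word_embedding = embedder(ngram_ids, context_vecs)
print(rare_word_embedding.shape)               # torch.Size([1, 768])
```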
| Document type: | Conference contribution (Paper) |
|---|---|
| EU Funded Grant Agreement Number: | 740516 |
| EU Projects: | Horizon 2020 > ERC Grants > ERC Advanced Grant > ERC Grant 740516: NonSequeToR - Non-sequence models for tokenization replacement |
| Cross-faculty institutions: | Centrum für Informations- und Sprachverarbeitung (CIS) |
| Subject areas: | 000 Computer science, information and general works > 000 Computer science, knowledge, systems; 400 Language > 410 Linguistics |
| URN: | urn:nbn:de:bvb:19-epub-72196-8 |
| Language: | English |
| Document ID: | 72196 |
| Date of publication on Open Access LMU: | 20 May 2020, 09:45 |
| Last modified: | 4 Nov 2020, 13:53 |

