Abstract
Word embeddings typically represent different meanings of a word in a single conflated vector. Empirical analysis of embeddings of ambiguous words is currently limited by the small size of manually annotated resources and by the fact that word senses are treated as unrelated individual concepts. We present a large dataset based on manual Wikipedia annotations and word senses, where word senses from different words are related by semantic classes. This is the basis for novel diagnostic tests for an embedding's content: we probe word embeddings for semantic classes and analyze the embedding space by classifying embeddings into semantic classes. Our main findings are: (i) Information about a sense is generally represented well in a single-vector embedding – if the sense is frequent. (ii) A classifier can accurately predict whether a word is single-sense or multi-sense, based only on its embedding. (iii) Although rare senses are not well represented in single-vector embeddings, this does not have a negative impact on an NLP application whose performance depends on frequent senses.
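A minimal sketch of the kind of probing setup described above: a simple linear classifier is trained to predict a property (here, single-sense vs. multi-sense) from a word's embedding alone. The data, names, and hyperparameters below are illustrative placeholders, not the paper's actual dataset or configuration.

```python
# Hypothetical probing sketch: can a linear classifier recover
# "single-sense vs. multi-sense" from the embedding vector alone?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-ins for real data: one pre-trained embedding per word
# (e.g., 300-dimensional) and an annotated binary label.
num_words, dim = 1000, 300
embeddings = rng.normal(size=(num_words, dim))        # placeholder for real embeddings
is_multi_sense = rng.integers(0, 2, size=num_words)   # placeholder for annotated labels

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, is_multi_sense, test_size=0.2, random_state=0
)

# A linear probe: high held-out accuracy would indicate that the
# probed property is linearly recoverable from the embedding space.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```

The same setup applies to probing for semantic classes by swapping the binary label for a (multi-class) semantic-class label.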
Document type: | Conference |
---|---|
EU Funded Grant Agreement Number: | 740516 |
EU projects: | Horizon 2020 > ERC Grants > ERC Advanced Grant > ERC Grant 740516: NonSequeToR - Non-sequence models for tokenization replacement |
Cross-faculty institutions: | Centrum für Informations- und Sprachverarbeitung (CIS) |
Subject areas: | 000 Computer science, information & general works > 000 Computer science, knowledge & systems; 400 Language > 410 Linguistics |
URN: | urn:nbn:de:bvb:19-epub-72190-4 |
ISBN: | 978-1-950737-48-2 |
Place: | Stroudsburg, USA |
Language: | English |
Document ID: | 72190 |
Date published on Open Access LMU: | 20 May 2020, 09:25 |
Last modified: | 4 Nov. 2020, 13:53 |