Abstract
Word embeddings typically represent different meanings of a word in a single conflated vector. Empirical analysis of embeddings of ambiguous words is currently limited by the small size of manually annotated resources and by the fact that word senses are treated as unrelated individual concepts. We present a large dataset based on manual Wikipedia annotations and word senses, where word senses from different words are related by semantic classes. This is the basis for novel diagnostic tests for an embedding's content: we probe word embeddings for semantic classes and analyze the embedding space by classifying embeddings into semantic classes. Our main findings are: (i) Information about a sense is generally represented well in a single-vector embedding – if the sense is frequent. (ii) A classifier can accurately predict whether a word is single-sense or multi-sense, based only on its embedding. (iii) Although rare senses are not well represented in single-vector embeddings, this does not have a negative impact on an NLP application whose performance depends on frequent senses.
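The diagnostic probing the abstract describes can be illustrated with a minimal sketch. This is not the paper's released code: the embeddings and semantic-class labels below are synthetic (each class is a noisy cluster around a random centroid, standing in for the Wikipedia-derived data), and the probe is a simple linear classifier trained by least squares onto one-hot class targets.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim, per_class = 5, 50, 200

# Toy embedding space: each semantic class clusters around a centroid.
centroids = rng.normal(size=(n_classes, dim))
X = np.vstack([c + 0.3 * rng.normal(size=(per_class, dim)) for c in centroids])
y = np.repeat(np.arange(n_classes), per_class)

# Shuffle and split into train/test halves.
order = rng.permutation(len(X))
X, y = X[order], y[order]
half = len(X) // 2
X_tr, y_tr, X_te, y_te = X[:half], y[:half], X[half:], y[half:]

# Linear probe: least-squares regression onto one-hot class targets.
targets = np.eye(n_classes)[y_tr]
W, *_ = np.linalg.lstsq(X_tr, targets, rcond=None)
pred = np.argmax(X_te @ W, axis=1)
acc = (pred == y_te).mean()
print(f"probe accuracy: {acc:.2f}")
```

High probe accuracy on held-out vectors indicates that semantic-class information is linearly recoverable from the embeddings; the same setup, with a binary label, corresponds to the single-sense vs. multi-sense prediction in finding (ii).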
| Field | Value |
|---|---|
| Item Type | Conference |
| EU Funded Grant Agreement Number | 740516 |
| EU Projects | Horizon 2020 > ERC Grants > ERC Advanced Grant > ERC Grant 740516: NonSequeToR - Non-sequence models for tokenization replacement |
| Research Centers | Center for Information and Language Processing (CIS) |
| Subjects | 000 Computer science, information and general works > 000 Computer science, knowledge, and systems; 400 Language > 410 Linguistics |
| URN | urn:nbn:de:bvb:19-epub-72190-4 |
| ISBN | 978-1-950737-48-2 |
| Place of Publication | Stroudsburg, USA |
| Language | English |
| Item ID | 72190 |
| Date Deposited | 20 May 2020, 09:25 |
| Last Modified | 04 Nov 2020, 13:53 |