
Abstract
We address the task of unsupervised Semantic Textual Similarity (STS) by ensembling diverse pre-trained sentence encoders into sentence meta-embeddings. We apply, extend and evaluate different meta-embedding methods from the word embedding literature at the sentence level, including dimensionality reduction (Yin and Schütze, 2016), generalized Canonical Correlation Analysis (Rastogi et al., 2015) and cross-view auto-encoders (Bollegala and Bao, 2018). Our sentence meta-embeddings set a new unsupervised State of The Art (SoTA) on the STS Benchmark and on the STS12–STS16 datasets, with gains of between 3.7% and 6.4% Pearson's r over single-source systems.
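One of the meta-embedding families the abstract names, dimensionality reduction in the style of Yin and Schütze (2016), can be sketched as concatenating the per-encoder sentence vectors and compressing them with a truncated SVD. The sketch below is illustrative only; the encoder names, dimensions and the choice of SVD are assumptions, not the paper's exact pipeline.

```python
import numpy as np

# Hypothetical outputs of two pre-trained sentence encoders for the
# same 5 sentences (dimensions are illustrative, not from the paper).
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(5, 300))  # encoder A: 300-dim vectors
emb_b = rng.normal(size=(5, 768))  # encoder B: 768-dim vectors

def concat_svd_meta(embs, k=128):
    """Concatenate per-source embeddings, then reduce with truncated SVD:
    a simple dimensionality-reduction meta-embedding."""
    # L2-normalize each source so no single encoder dominates the concatenation.
    normed = [e / np.linalg.norm(e, axis=1, keepdims=True) for e in embs]
    x = np.concatenate(normed, axis=1)   # shape (n, d_a + d_b)
    x = x - x.mean(axis=0)               # center before SVD
    u, s, _ = np.linalg.svd(x, full_matrices=False)
    k = min(k, len(s))
    return u[:, :k] * s[:k]              # (n, k) meta-embeddings

meta = concat_svd_meta([emb_a, emb_b], k=4)
print(meta.shape)  # (5, 4)
```

Similarity between two sentences would then be scored as the cosine of their rows in `meta`, as is standard in unsupervised STS.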
| Item Type | Conference |
|---|---|
| EU Funded Grant Agreement Number | 740516 |
| EU Projects | Horizon 2020 > ERC Grants > ERC Advanced Grant > ERC Grant 740516: NonSequeToR - Non-sequence models for tokenization replacement |
| Research Centers | Center for Information and Language Processing (CIS) |
| Subjects | 000 Computer science, information and general works > 000 Computer science, knowledge, and systems; 400 Language > 410 Linguistics |
| URN | urn:nbn:de:bvb:19-epub-72194-7 |
| Place of Publication | Stroudsburg, USA |
| Annotation | https://arxiv.org/abs/1911.03700 |
| Language | English |
| Item ID | 72194 |
| Date Deposited | 20 May 2020, 09:50 |
| Last Modified | 04 Nov 2020, 13:53 |