Abstract
We address the task of unsupervised Semantic Textual Similarity (STS) by ensembling diverse pre-trained sentence encoders into sentence meta-embeddings. We apply, extend and evaluate different meta-embedding methods from the word embedding literature at the sentence level, including dimensionality reduction (Yin and Schütze, 2016), generalized Canonical Correlation Analysis (Rastogi et al., 2015) and cross-view auto-encoders (Bollegala and Bao, 2018). Our sentence meta-embeddings set a new unsupervised State of the Art (SoTA) on the STS Benchmark and on the STS12–STS16 datasets, with gains of between 3.7% and 6.4% Pearson's r over single-source systems.
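As a rough illustration of the simplest of these approaches, the sketch below concatenates sentence embeddings from two encoders and applies SVD-based dimensionality reduction (in the spirit of Yin and Schütze's word-level method) before scoring a sentence pair by cosine similarity. The encoder dimensions, the target dimension `k = 300`, and the randomly simulated inputs are assumptions for illustration only, not the paper's exact pipeline.

```python
import numpy as np

# Hypothetical sentence embeddings from two different pre-trained encoders
# (e.g. 1024-d and 768-d); in practice these come from the actual models.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(1000, 1024))   # encoder A, 1000 sentences (simulated)
emb_b = rng.normal(size=(1000, 768))    # encoder B, same 1000 sentences (simulated)

# 1) Length-normalize each source so no single encoder dominates the concatenation.
def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

concat = np.hstack([l2_normalize(emb_a), l2_normalize(emb_b)])  # naive concatenation meta-embedding

# 2) SVD-based dimensionality reduction of the concatenation.
concat_centered = concat - concat.mean(axis=0)
U, S, Vt = np.linalg.svd(concat_centered, full_matrices=False)
k = 300                                     # target meta-embedding dimension (assumption)
meta = concat_centered @ Vt[:k].T           # reduced sentence meta-embeddings

# Unsupervised STS score for a sentence pair = cosine similarity of its meta-embeddings.
def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(meta[0], meta[1]))
```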
| Document type: | Conference |
|---|---|
| EU Funded Grant Agreement Number: | 740516 |
| EU projects: | Horizon 2020 > ERC Grants > ERC Advanced Grant > ERC Grant 740516: NonSequeToR - Non-sequence models for tokenization replacement |
| Cross-faculty institutions: | Centrum für Informations- und Sprachverarbeitung (CIS) |
| Subject areas: | 000 Computer science, information science, general works > 000 Computer science, knowledge, systems; 400 Language > 410 Linguistics |
| URN: | urn:nbn:de:bvb:19-epub-72194-7 |
| Place: | Stroudsburg, USA |
| Note: | https://arxiv.org/abs/1911.03700 |
| Language: | English |
| Document ID: | 72194 |
| Date published on Open Access LMU: | 20 May 2020, 09:50 |
| Last modified: | 4 Nov 2020, 13:53 |