Abstract
Word embeddings are useful for a wide variety of tasks, but they lack interpretability. By rotating word spaces, interpretable dimensions can be identified while preserving the information contained in the embeddings without any loss. In this work, we investigate three methods for making word spaces interpretable by rotation: Densifier (Rothe et al., 2016), linear SVMs, and DensRay, a new method we propose. In contrast to Densifier, DensRay can be computed in closed form and is hyperparameter-free, and thus more robust. We evaluate the three methods on lexicon induction and set-based word analogy. In addition, we provide qualitative insights as to how interpretable word spaces can be used for removing gender bias from embeddings.
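To make the idea concrete, here is a minimal sketch (not the paper's exact formulation, and the pair-weighting scheme is an assumption) of a DensRay-style closed-form computation: given embeddings with binary lexicon labels, accumulate outer products of pairwise difference vectors, weighted positively for pairs with different labels and negatively for same-label pairs, and take the top eigenvector of the resulting symmetric matrix as an interpretable direction.

```python
import numpy as np

def densray_direction(X, labels):
    """Illustrative sketch of a closed-form 'dense direction' computation.

    X: (n, d) array of word embeddings; labels: length-n binary labels
    (e.g. positive/negative sentiment from a seed lexicon).
    Returns a unit vector along which the two label groups separate.
    NOTE: the exact weighting used by DensRay may differ; this is an
    assumption for illustration only.
    """
    X = np.asarray(X, dtype=float)
    d = X.shape[1]
    A = np.zeros((d, d))
    for i in range(len(X)):
        for j in range(len(X)):
            # emphasize differences across labels, de-emphasize within
            w = 1.0 if labels[i] != labels[j] else -1.0
            diff = X[i] - X[j]
            A += w * np.outer(diff, diff)
    # top eigenvector of the symmetric matrix A (eigh sorts ascending)
    _, eigvecs = np.linalg.eigh(A)
    return eigvecs[:, -1]
```

Projecting all embeddings onto the returned vector yields a single interpretable coordinate (e.g. a sentiment score); because the computation is an eigendecomposition of a fixed matrix, there are no hyperparameters to tune, unlike the iterative optimization in Densifier.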
| Item Type: | Conference |
|---|---|
| EU Funded Grant Agreement Number: | 740516 |
| EU Projects: | Horizon 2020 > ERC Grants > ERC Advanced Grant > ERC Grant 740516: NonSequeToR - Non-sequence models for tokenization replacement |
| Research Centers: | Center for Information and Language Processing (CIS) |
| Subjects: | 000 Computer science, information and general works > 000 Computer science, knowledge, and systems; 400 Language > 410 Linguistics |
| URN: | urn:nbn:de:bvb:19-epub-72192-5 |
| ISBN: | 978-1-950737-90-1 |
| Place of Publication: | Stroudsburg, USA |
| Language: | English |
| Item ID: | 72192 |
| Date Deposited: | 20 May 2020 09:48 |
| Last Modified: | 4 Nov 2020 13:53 |
