Abstract
In this work, we focus on the task of open-type relation argument extraction (ORAE): given a corpus, a query entity Q, and a knowledge base relation (e.g., “Q authored notable work with title X”), the model has to extract an argument of non-standard entity type (an entity that cannot be extracted by a standard named entity tagger, for example, X: the title of a book or a work of art) from the corpus. We develop and compare a wide range of neural models for this task, yielding large improvements over a strong baseline obtained with a neural question answering system. The impact of different sentence encoding architectures and answer extraction methods is systematically compared. An encoder based on gated recurrent units combined with a conditional random field tagger yields the best results. We release a data set to train and evaluate ORAE, based on Wikidata and obtained by distant supervision.
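The best-performing model in the abstract pairs a GRU encoder with a conditional random field tagger, which casts answer extraction as sequence labeling: each token gets a BIO tag, and Viterbi decoding picks the globally best tag sequence. As a rough illustration of that tagging view (not the authors' code), the sketch below runs Viterbi decoding over hand-invented emission and transition scores; in the real model the emission scores would come from the GRU encoder and both score matrices would be learned.

```python
import numpy as np

# Toy illustration of CRF-style BIO tagging for argument extraction:
# given per-token emission scores and tag-transition scores, Viterbi
# decoding recovers the highest-scoring tag sequence.
# All numbers below are invented for the example.

TAGS = ["O", "B-ARG", "I-ARG"]  # hypothetical tag set for one argument slot

def viterbi(emissions: np.ndarray, transitions: np.ndarray) -> list:
    """Return the best tag sequence.

    emissions:   (seq_len, n_tags) per-token tag scores (from an encoder)
    transitions: (n_tags, n_tags) score of moving from tag i to tag j
    """
    seq_len, n_tags = emissions.shape
    score = emissions[0].copy()                  # best score ending in each tag
    backptr = np.zeros((seq_len, n_tags), dtype=int)
    for t in range(1, seq_len):
        # total[i, j] = best path ending in tag i, then transition i -> j
        total = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Follow back-pointers from the best final tag.
    best = [int(score.argmax())]
    for t in range(seq_len - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return [TAGS[i] for i in reversed(best)]

# Tiny example: 4 tokens; scores favour tagging tokens 1-2 as the argument.
emissions = np.array([
    [2.0, 0.1, 0.0],   # token 0: likely O
    [0.1, 2.0, 0.0],   # token 1: likely B-ARG
    [0.1, 0.0, 2.0],   # token 2: likely I-ARG
    [2.0, 0.1, 0.0],   # token 3: likely O
])
transitions = np.array([
    [0.5, 0.0, -5.0],  # from O:  I-ARG directly after O is penalised
    [0.0, -1.0, 0.5],  # from B-ARG
    [0.0, -1.0, 0.5],  # from I-ARG
])
print(viterbi(emissions, transitions))  # ['O', 'B-ARG', 'I-ARG', 'O']
```

The transition scores are what distinguish this from independent per-token classification: the large penalty on O → I-ARG keeps the decoder from producing ill-formed spans even when a single token's emission score is noisy.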
| Item Type: | Journal article |
|---|---|
| Research Centers: | Center for Information and Language Processing (CIS) |
| Subjects: | 000 Computer science, information and general works > 000 Computer science, knowledge, and systems; 400 Language > 400 Language |
| URN: | urn:nbn:de:bvb:19-epub-68930-1 |
| Language: | English |
| Item ID: | 68930 |
| Date Deposited: | 25 Sep 2019 07:29 |
| Last Modified: | 04 Nov 2020 13:51 |

