Yaghoobzadeh, Yadollah; Adel, Heike and Schütze, Hinrich (April 2018): Corpus-Level Fine-Grained Entity Typing. In: Journal of Artificial Intelligence Research, Vol. 61, pp. 835–862.


Abstract

Extracting information about entities remains an important research area. This paper addresses the problem of corpus-level entity typing, i.e., inferring from a large corpus that an entity is a member of a class, such as “food” or “artist”. The application of entity typing we are interested in is knowledge base completion, specifically, learning which classes an entity is a member of. We propose FIGMENT to tackle this problem. FIGMENT is embedding-based and combines (i) a global model that computes scores based on global information about an entity and (ii) a context model that first evaluates the individual occurrences of an entity and then aggregates the scores. Each of the two proposed models has specific properties. For the global model, learning high-quality entity representations is crucial because they are the only source used for the predictions. Therefore, we introduce representations based on the name and contexts of entities at three levels: entity, word, and character. We show that each level provides complementary information and that a multi-level representation performs best. For the context model, we need to use distant supervision since no context-level labels are available for entities. Distantly supervised labels are noisy, and this noise harms model performance. Therefore, we introduce and apply new algorithms for noise mitigation using multi-instance learning. We show the effectiveness of our models on a large entity typing dataset built from Freebase.
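To make the described two-component architecture concrete, the sketch below shows one plausible way a global score and aggregated per-occurrence context scores could be combined for a single (entity, type) pair. This is a minimal illustration under assumed names (score_global, score_context, weight) and a simple averaging aggregation; it is not the authors' code, and the paper's actual combination and its multi-instance noise mitigation may differ.

```python
# Illustrative sketch of corpus-level typing with a global model plus a
# context model, as described in the abstract. All function names and the
# interpolation weight are assumptions for illustration only.

from typing import Callable, Iterable, List


def corpus_level_type_score(
    entity_repr: List[float],                 # global, multi-level entity representation
    mention_contexts: Iterable[List[float]],  # one vector per occurrence of the entity
    type_id: int,
    score_global: Callable[[List[float], int], float],
    score_context: Callable[[List[float], int], float],
    weight: float = 0.5,                      # assumed interpolation weight
) -> float:
    """Combine a global score with aggregated context scores for one (entity, type) pair."""
    # Global model: one score from the entity's aggregate representation.
    g = score_global(entity_repr, type_id)

    # Context model: score each individual occurrence, then aggregate.
    # Plain averaging is used here for simplicity; the paper additionally
    # applies multi-instance learning to mitigate noisy distant labels.
    context_scores = [score_context(c, type_id) for c in mention_contexts]
    ctx = sum(context_scores) / len(context_scores) if context_scores else 0.0

    return weight * g + (1.0 - weight) * ctx
```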
