Frey, Christian M. M. (ORCID: https://orcid.org/0000-0003-2458-6651); Ma, Yunpu (ORCID: https://orcid.org/0000-0001-6112-8794) and Schubert, Matthias (ORCID: https://orcid.org/0000-0002-6566-6343) (2023): SEA: Graph Shell Attention in Graph Neural Networks. In: Amini, Massih-Reza; Canu, Stéphane; Fischer, Asja; Guns, Tias; Novak, Petra Kralj and Tsoumakas, Grigorios (eds.): Machine Learning and Knowledge Discovery in Databases. European Conference, ECML PKDD 2022, Grenoble, France, September 19–23, 2022. Vol. 13714, pp. 326-343. Cham: Springer.

Full text not available on 'Open Access LMU'.

Abstract

A common problem in Graph Neural Networks (GNNs) is known as over-smoothing. By increasing the number of iterations within the message-passing of GNNs, the nodes' representations of the input graph align and become indiscernible. The latest models employing attention mechanisms with Graph Transformer Layers (GTLs) are still restricted to the layer-wise computational workflow of a GNN and therefore do not prevent such effects. In our work, we relax the GNN architecture by implementing a routing heuristic. Specifically, the nodes' representations are routed to dedicated experts. Each expert calculates the representations according to its respective GNN workflow. The distinguishable GNNs are defined by k-localized views starting from the central node. We call this procedure Graph Shell Attention (SEA), where experts process different subgraphs in a transformer-motivated fashion. Intuitively, by increasing the number of experts, the model gains in expressiveness such that a node's representation is solely based on nodes that are located within the receptive field of an expert. We evaluate our architecture on various benchmark datasets showing competitive results while drastically reducing the number of parameters compared to state-of-the-art models.
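To make the mechanism described in the abstract concrete, the following is a minimal, hypothetical Python sketch of the shell-attention idea: each node is routed to one of several experts, and each expert attends over the k-hop shell (receptive field) around that node. This is not the authors' implementation; the module names, the hard argmax routing rule, and the per-node attention are illustrative assumptions layered on standard PyTorch components.

import torch
import torch.nn as nn


def k_hop_neighbors(adj: torch.Tensor, node: int, k: int) -> torch.Tensor:
    # Return indices of all nodes within k hops of `node` (including itself).
    reachable = torch.zeros(adj.size(0), dtype=torch.bool)
    reachable[node] = True
    frontier = reachable.clone()
    for _ in range(k):
        frontier = adj[frontier].any(dim=0) & ~reachable
        reachable |= frontier
    return reachable.nonzero(as_tuple=False).squeeze(-1)


class ShellExpert(nn.Module):
    # One expert: self-attention of the central node over its k-hop shell.
    def __init__(self, dim: int, k: int, heads: int = 4):
        super().__init__()
        self.k = k
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor, adj: torch.Tensor, node: int) -> torch.Tensor:
        shell = k_hop_neighbors(adj, node, self.k)
        h = x[shell].unsqueeze(0)           # (1, |shell|, dim) keys/values
        query = x[node].view(1, 1, -1)      # central node as the query
        out, _ = self.attn(query, h, h)
        return out.squeeze(0).squeeze(0)    # (dim,)


class SEALayer(nn.Module):
    # Routes each node to one expert via a learned gating score; hard argmax
    # routing is an assumption here, the paper's heuristic may differ.
    def __init__(self, dim: int, hops: tuple = (1, 2, 3)):
        super().__init__()
        self.experts = nn.ModuleList(ShellExpert(dim, k) for k in hops)
        self.router = nn.Linear(dim, len(hops))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        choice = self.router(x).argmax(dim=-1)   # one expert per node
        out = torch.empty_like(x)
        for v in range(x.size(0)):
            out[v] = self.experts[choice[v]](x, adj, v)
        return out


if __name__ == "__main__":
    # Tiny usage example on a random symmetric adjacency matrix.
    n, dim = 8, 16
    adj = torch.rand(n, n) < 0.3
    adj = adj | adj.t()
    adj.fill_diagonal_(True)
    x = torch.randn(n, dim)
    layer = SEALayer(dim)
    print(layer(x, adj).shape)  # torch.Size([8, 16])

In this sketch, increasing the number of experts (distinct hop radii) widens the set of receptive fields a node's representation can be computed from, which mirrors the expressiveness argument in the abstract.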
