
Alberti, Silas; Dern, Niclas; Thesing, Laura; Kutyniok, Gitta (ORCID: https://orcid.org/0000-0001-9738-2487) (2023): Sumformer: Universal Approximation for Efficient Transformers. 2nd Annual Workshop on Topology, Algebra and Geometry in Machine Learning (TAG-ML), Honolulu, HI, USA, 28 July 2023. In: Doster, Timothy; Emerson, Tegan; Kvinge, Henry; Miolane, Nina; Papillon, Mathilde; Rieck, Bastian; Sanborn, Sophia (eds.): Topological, Algebraic and Geometric Learning Workshops 2023. Proceedings of Machine Learning Research (PMLR), vol. 221. ML Research Press, pp. 72-86.

Full text is not available on 'Open Access LMU'.

Abstract

Natural language processing (NLP) made an impressive jump with the introduction of Transformers. ChatGPT is one of the most famous examples, changing the perception of the possibilities of AI even outside the research community. However, despite this impressive performance, the quadratic time and space complexity of Transformers with respect to sequence length poses significant limitations for handling long sequences. While efficient Transformer architectures like Linformer and Performer with linear complexity have emerged as promising solutions, their theoretical understanding remains limited. In this paper, we introduce Sumformer, a novel and simple architecture capable of universally approximating equivariant sequence-to-sequence functions. We use Sumformer to give the first universal approximation results for Linformer and Performer. Moreover, we derive a new proof for Transformers, showing that just one attention layer is sufficient for universal approximation.
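The abstract does not spell out the architecture, but the name suggests replacing quadratic attention with a summed, permutation-equivariant aggregation of per-token features. The following is a minimal illustrative sketch under that assumption; the two MLPs (here called phi and psi), their sizes, and the concatenation scheme are hypothetical choices for illustration, not the construction from the paper.

    import torch
    import torch.nn as nn

    class SumAggregationLayer(nn.Module):
        """Hypothetical sum-aggregation layer: each token is combined with a
        global summary obtained by summing a learned per-token map over the
        sequence. The cost is linear in sequence length and the output is
        permutation-equivariant."""

        def __init__(self, d_model: int, d_hidden: int = 128):
            super().__init__()
            # phi: per-token map whose outputs are summed into one global vector
            self.phi = nn.Sequential(
                nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_hidden)
            )
            # psi: combines each token with the shared global summary
            self.psi = nn.Sequential(
                nn.Linear(d_model + d_hidden, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq_len, d_model)
            summary = self.phi(x).sum(dim=1, keepdim=True)   # (batch, 1, d_hidden)
            summary = summary.expand(-1, x.size(1), -1)      # broadcast to every token
            return self.psi(torch.cat([x, summary], dim=-1)) # (batch, seq_len, d_model)

    if __name__ == "__main__":
        layer = SumAggregationLayer(d_model=32)
        tokens = torch.randn(2, 10, 32)
        print(layer(tokens).shape)  # torch.Size([2, 10, 32])

Because the only interaction between tokens is through the shared sum, permuting the input tokens permutes the output in the same way, which is the equivariance property the abstract refers to.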
