ORCID: https://orcid.org/0000-0001-9738-2487
(2023):
Sumformer: Universal Approximation for Efficient Transformers.
2nd Annual Workshop on Topology, Algebra and Geometry in Machine Learning (TAG-ML), Honolulu, HI, USA, 28 July 2023.
In: Doster, Timothy; Emerson, Tegan; Kvinge, Henry; Miolane, Nina; Papillon, Mathilde; Rieck, Bastian and Sanborn, Sophia (eds.):
Topological, Algebraic and Geometric Learning Workshops 2023, 28 July 2023, Proceedings of Machine Learning Research (PMLR),
Vol. 221, ML Research Press, pp. 72-86.
Abstract
Natural language processing (NLP) made an impressive jump with the introduction of Transformers. ChatGPT is one of the most famous examples, changing the perception of the possibilities of AI even outside the research community. However, besides the impressive performance, the quadratic time and space complexity of Transformers with respect to sequence length poses significant limitations for handling long sequences. While efficient Transformer architectures like Linformer and Performer with linear complexity have emerged as promising solutions, their theoretical understanding remains limited. In this paper, we introduce Sumformer, a novel and simple architecture capable of universally approximating equivariant sequence-to-sequence functions. We use Sumformer to give the first universal approximation results for Linformer and Performer. Moreover, we derive a new proof for Transformers, showing that just one attention layer is sufficient for universal approximation.
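To make the notion of a sum-based, permutation-equivariant sequence-to-sequence block concrete, the following is a minimal NumPy sketch: each token is updated from its own embedding together with a single summed (global) representation of the whole sequence. This is only an illustration of the general idea; the exact layer structure, the maps `phi`/`psi`, and the dimensions used in the paper may differ, and the two-layer MLPs below are hypothetical choices made for the example.

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    """Two-layer MLP with ReLU, applied along the last axis."""
    return np.maximum(x @ w1 + b1, 0.0) @ w2 + b2

def sum_block(X, params):
    """X: (n, d) token embeddings -> (n, d) updated embeddings."""
    phi = mlp(X, *params["phi"])            # per-token features, shape (n, h)
    s = phi.sum(axis=0, keepdims=True)      # global summary obtained by summation, (1, h)
    s = np.repeat(s, X.shape[0], axis=0)    # broadcast the summary to every token
    # Each output token depends on its own embedding and the shared sum.
    return mlp(np.concatenate([X, s], axis=1), *params["psi"])

rng = np.random.default_rng(0)
d, h = 8, 16
params = {
    "phi": (rng.normal(size=(d, h)), np.zeros(h),
            rng.normal(size=(h, h)), np.zeros(h)),
    "psi": (rng.normal(size=(d + h, h)), np.zeros(h),
            rng.normal(size=(h, d)), np.zeros(d)),
}

X = rng.normal(size=(5, d))
perm = rng.permutation(5)
# Permutation equivariance: permuting the input tokens permutes the output rows.
assert np.allclose(sum_block(X, params)[perm], sum_block(X[perm], params))
```

Because the summed representation is invariant to the order of the tokens while the per-token update is applied row-wise, the block is permutation equivariant by construction, which is the property the universal approximation statement refers to.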
| Document type: | Conference contribution (paper) |
|---|---|
| Faculty: | Mathematics, Informatics and Statistics > Mathematics |
| Subject areas: | 500 Natural sciences and mathematics > 510 Mathematics |
| ISSN: | 2640-3498 |
| Language: | English |
| Document ID: | 123830 |
| Date deposited on Open Access LMU: | 25 Feb 2025 15:56 |
| Last modified: | 25 Feb 2025 15:56 |