ORCID: https://orcid.org/0000-0001-9738-2487
(2023):
Expressivity of Deep Neural Networks.
In: Grohs, Philipp and Kutyniok, Gitta (eds.):
Mathematical Aspects of Deep Learning. Cambridge: Cambridge University Press, pp. 149-199
Abstract
In this chapter, we give a comprehensive overview of the large variety of approximation results for neural networks. Approximation rates for classical function spaces, as well as the benefits of deep neural networks over shallow ones for specifically structured function classes, are discussed. While the main body of existing results concerns general feedforward architectures, we also review approximation results for convolutional, residual, and recurrent neural networks.
| Item Type: | Book Section |
|---|---|
| Faculties: | Mathematics, Computer Science and Statistics > Mathematics; Mathematics, Computer Science and Statistics > Mathematics > Bavarian Chair for Mathematical Foundations of Artificial Intelligence |
| Research Centers: | Center for Advanced Studies (CAS) |
| Subjects: | 000 Computer science, information and general works > 000 Computer science, knowledge, and systems; 500 Science > 510 Mathematics |
| ISBN: | 978-1-316-51678-2 |
| Place of Publication: | Cambridge |
| Language: | English |
| Item ID: | 121709 |
| Date Deposited: | 08 Oct 2024 11:11 |
| Last Modified: | 20 May 2025 11:07 |
