Abstract
Neural networks have greatly boosted performance in computer vision by learning powerful representations of input data. The drawback of end-to-end training for maximal overall performance is black-box models whose hidden representations lack interpretability: because distributed coding in latent layers is optimal for robustness, attributing meaning to parts of a hidden feature vector or to individual neurons is hindered. We formulate interpretation as a translation of hidden representations onto semantic concepts that are comprehensible to the user. The mapping between the two domains must be bijective so that semantic modifications in the target domain correctly alter the original representation. The proposed invertible interpretation network can be transparently applied on top of existing architectures with no need to modify or retrain them. Consequently, we translate an original representation to an equivalent yet interpretable one and back without affecting the expressiveness and performance of the original. The invertible interpretation network disentangles the hidden representation into separate, semantically meaningful concepts. Moreover, we present an efficient approach to define semantic concepts by sketching only two images, as well as an unsupervised strategy. Experimental evaluation demonstrates the wide applicability to interpreting existing classification and image-generation networks as well as to semantically guided image manipulation.
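The core mechanism described above is a bijective translation T between a frozen model's hidden representation z and a factored, interpretable representation T(z), where each slice of T(z) corresponds to a semantic concept. The following is a minimal sketch of that idea, assuming PyTorch; the `AffineCoupling` block is a generic invertible-flow building block and the dimensions, factor slices, and names here are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Invertible block: rescales and shifts one half of z conditioned on the other half."""
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        # Small conditioner network; outputs log-scale and shift for the second half.
        self.net = nn.Sequential(
            nn.Linear(self.half, 128), nn.ReLU(),
            nn.Linear(128, 2 * (dim - self.half)),
        )

    def forward(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        log_s, t = self.net(z1).chunk(2, dim=1)
        return torch.cat([z1, z2 * torch.exp(log_s) + t], dim=1)

    def inverse(self, z_tilde):
        # Exact inverse: z1 passes through unchanged, so log_s and t are recomputable.
        z1, z2 = z_tilde[:, :self.half], z_tilde[:, self.half:]
        log_s, t = self.net(z1).chunk(2, dim=1)
        return torch.cat([z1, (z2 - t) * torch.exp(-log_s)], dim=1)

# Usage: translate, edit one (hypothetical) semantic factor, translate back.
with torch.no_grad():
    T = AffineCoupling(dim=64)
    z = torch.randn(1, 64)          # hidden representation from a frozen, unmodified model
    z_tilde = T(z)                  # interpretable domain: concepts live in slices
    z_tilde[:, :8] += 0.5           # modify the first factor (illustrative choice of slice)
    z_edited = T.inverse(z_tilde)   # map back into the original representation space
    assert torch.allclose(T.inverse(T(z)), z, atol=1e-5)  # bijectivity check
```

Because T is invertible, the original model never needs retraining: edits made in the interpretable domain are mapped back exactly, which is what allows semantically guided manipulation of the original representation.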
| Document type: | Conference contribution (paper) |
|---|---|
| Faculty: | History and Art Studies > Department of Art Studies > Art History |
| Subject areas: | 000 Computer science, information & general works > 004 Computer science; 700 Arts & recreation > 700 Arts |
| Place: | New York |
| Language: | English |
| Document ID: | 107298 |
| Date published on Open Access LMU: | 04 Oct 2023, 13:33 |
| Last modified: | 28 May 2024, 12:35 |