Abstract
The behavior of deep neural networks (DNNs) is hard to understand. This makes it necessary to explore post hoc explanation methods. We conduct the first comprehensive evaluation of explanation methods for NLP. To this end, we design two novel evaluation paradigms that cover two important classes of NLP problems: small context and large context problems. Both paradigms require no manual annotation and are therefore broadly applicable. We also introduce LIMSSE, an explanation method inspired by LIME that is designed for NLP. We show empirically that LIMSSE, LRP and DeepLIFT are the most effective explanation methods and recommend them for explaining DNNs in NLP.
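The LIME family of methods the abstract refers to explains a single prediction by fitting a weighted linear surrogate model to a black-box classifier on perturbed copies of the input. A minimal sketch for text is shown below; `toy_predict` is a hypothetical stand-in for a real classifier, and the whole-word masking used here is only the generic LIME recipe, not the substring sampling that distinguishes LIMSSE.

```python
import numpy as np

def lime_text_explain(tokens, predict, n_samples=1000, seed=0):
    """Locally approximate a black-box text classifier with a linear model.

    tokens:  list of words in the input sentence
    predict: maps a list of words to a class probability (the black box)
    Returns one importance weight per token.
    """
    rng = np.random.default_rng(seed)
    d = len(tokens)
    # Random binary masks over words: 1 = keep the word, 0 = drop it.
    masks = rng.integers(0, 2, size=(n_samples, d))
    masks[0] = 1  # always include the unperturbed input
    ys = np.array([predict([t for t, m in zip(tokens, row) if m])
                   for row in masks])
    # Weight each perturbed sample by its similarity to the original
    # input, here simply the fraction of words that were kept.
    w = masks.mean(axis=1)
    X = np.hstack([masks, np.ones((n_samples, 1))])  # add an intercept column
    # Weighted least squares via the normal equations.
    WX = X * w[:, None]
    beta = np.linalg.lstsq(WX.T @ X, WX.T @ ys, rcond=None)[0]
    return beta[:-1]  # per-token importances (intercept dropped)

# Hypothetical black box: "positive" probability driven by the word "great".
def toy_predict(words):
    return 0.9 if "great" in words else 0.1

weights = lime_text_explain(["this", "movie", "is", "great"], toy_predict)
```

On this toy model the surrogate's largest coefficient falls on "great", i.e. the word that actually drives the black-box score, which is the behavior an evaluation paradigm for explanation methods would need to verify.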
| Document type | Conference paper |
|---|---|
| EU funded grant agreement number | 740516 |
| EU projects | Horizon 2020 > ERC Grants > ERC Advanced Grant > ERC Grant 740516: NonSequeToR - Non-sequence models for tokenization replacement |
| Cross-faculty institutions | Centrum für Informations- und Sprachverarbeitung (CIS) |
| Subject areas | 000 Computer science, information & general works > 000 Computer science, knowledge, systems; 000 Computer science, information & general works > 004 Data processing, computer science; 400 Language > 400 Language; 400 Language > 410 Linguistics |
| URN | urn:nbn:de:bvb:19-epub-61866-4 |
| Place | Stroudsburg, PA |
| Note | ISBN 978-1-948087-32-2 |
| Language | English |
| Document ID | 61866 |
| Date deposited on Open Access LMU | 13 May 2019, 13:57 |
| Last modified | 4 Nov 2020, 13:39 |