
Abstract
The behavior of deep neural networks (DNNs) is hard to understand. This makes it necessary to explore post hoc explanation methods. We conduct the first comprehensive evaluation of explanation methods for NLP. To this end, we design two novel evaluation paradigms that cover two important classes of NLP problems: small context and large context problems. Both paradigms require no manual annotation and are therefore broadly applicable. We also introduce LIMSSE, an explanation method inspired by LIME that is designed for NLP. We show empirically that LIMSSE, LRP and DeepLIFT are the most effective explanation methods and recommend them for explaining DNNs in NLP.
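To illustrate the general idea behind LIME-style perturbation methods such as LIMSSE, here is a minimal sketch: sample perturbed versions of the input text by randomly dropping tokens, query the model on each sample, and estimate each token's importance from how its presence changes the prediction. The toy model, the uniform masking scheme, and the mean-difference importance proxy are all assumptions for illustration, not the paper's actual algorithm.

```python
import random

def toy_model(tokens):
    # Stand-in classifier (assumption): score 1.0 iff "great" is present.
    return 1.0 if "great" in tokens else 0.0

def perturbation_weights(tokens, model, n_samples=500, seed=0):
    """Crude LIME-style importance: mean model score when a token is
    present minus mean score when it is absent (illustrative proxy;
    real LIME fits a weighted linear model instead)."""
    rng = random.Random(seed)
    n = len(tokens)
    sum_in, cnt_in = [0.0] * n, [0] * n
    sum_out, cnt_out = [0.0] * n, [0] * n
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in range(n)]
        kept = [t for t, m in zip(tokens, mask) if m]
        score = model(kept)
        for i, m in enumerate(mask):
            if m:
                sum_in[i] += score
                cnt_in[i] += 1
            else:
                sum_out[i] += score
                cnt_out[i] += 1
    return [
        (sum_in[i] / cnt_in[i] if cnt_in[i] else 0.0)
        - (sum_out[i] / cnt_out[i] if cnt_out[i] else 0.0)
        for i in range(n)
    ]

tokens = "the movie was great".split()
weights = perturbation_weights(tokens, toy_model)
# The decisive token "great" receives a much higher weight than the rest.
```

The same sampling idea carries over to substrings rather than independent token masks, which is the adaptation to text that the abstract attributes to LIMSSE.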
| Field | Value |
|---|---|
| Item Type | Conference or Workshop Item (Paper) |
| EU Funded Grant Agreement Number | 740516 |
| EU Projects | Horizon 2020 > ERC Grants > ERC Advanced Grant > ERC Grant 740516: NonSequeToR - Non-sequence models for tokenization replacement |
| Research Centers | Center for Information and Language Processing (CIS) |
| Subjects | 000 Computer science, information and general works > 000 Computer science, knowledge, and systems; 000 Computer science, information and general works > 004 Data processing computer science; 400 Language > 400 Language; 400 Language > 410 Linguistics |
| URN | urn:nbn:de:bvb:19-epub-61866-4 |
| Place of Publication | Stroudsburg, PA |
| Annotation | ISBN 978-1-948087-32-2 |
| Language | English |
| Item ID | 61866 |
| Date Deposited | 13. May 2019, 13:57 |
| Last Modified | 04. Nov 2020, 13:39 |