Abstract
Question answering (QA) has recently shown impressive results in answering questions from specialized domains. Yet, a common challenge is adapting QA models to an unseen target domain. In this paper, we propose a novel self-supervised framework called QADA for QA domain adaptation. QADA introduces a novel data augmentation pipeline to augment training QA samples. Unlike existing methods, we enrich the samples via hidden-space augmentation. For questions, we introduce multi-hop synonyms and sample augmented token embeddings with Dirichlet distributions. For contexts, we develop an augmentation method that learns to drop context spans via a custom attentive sampling strategy. Additionally, contrastive learning is integrated into the proposed self-supervised adaptation framework QADA. Unlike existing approaches, we generate pseudo labels and propose to train the model via a novel attention-based contrastive adaptation method. The attention weights are used to build informative features for discrepancy estimation, which helps the QA model separate answers and generalize across source and target domains. To the best of our knowledge, our work is the first to leverage hidden-space augmentation and attention-based contrastive adaptation for self-supervised domain adaptation in QA. Our evaluation shows that QADA achieves considerable improvements on multiple target datasets over state-of-the-art baselines in QA domain adaptation.
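The question-augmentation step described above can be illustrated with a minimal sketch: a token embedding is replaced by a convex combination of itself and the embeddings of its multi-hop synonyms, with mixing weights drawn from a Dirichlet distribution. This is a simplified illustration of the general idea, not the paper's implementation; the function name, the single concentration parameter `alpha`, and the NumPy setting are assumptions.

```python
import numpy as np

def dirichlet_augment(token_embedding, synonym_embeddings, alpha=1.0, rng=None):
    """Hidden-space augmentation sketch (names and alpha are illustrative):
    mix a token embedding with its multi-hop synonym embeddings using
    Dirichlet-sampled weights, yielding a point in their convex hull."""
    rng = rng or np.random.default_rng()
    # Stack the original embedding with its synonym candidates.
    candidates = np.vstack([token_embedding] + list(synonym_embeddings))
    # Dirichlet weights are non-negative and sum to 1.
    weights = rng.dirichlet(alpha * np.ones(len(candidates)))
    # Weighted average in embedding space = augmented token embedding.
    return weights @ candidates

# Usage: augment a 4-dim token embedding with two synonym embeddings.
emb = np.ones(4)
augmented = dirichlet_augment(emb, [np.zeros(4), 2 * np.ones(4)], alpha=0.5)
```

Because the Dirichlet weights are a probability vector, each augmented embedding stays inside the convex hull of the token and its synonyms, so the perturbation remains semantically anchored rather than arbitrary noise.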
Document type: | Conference paper |
---|---|
Keywords: | Computation and Language (cs.CL); Artificial Intelligence (cs.AI); FOS: Computer and information sciences; Artificial Intelligence (AI) |
Faculty: | Business Administration > Institute of Artificial Intelligence (AI) in Management |
Subject areas: | 000 Computer science, information & general works > 000 Computer science, knowledge & systems |
Language: | English |
Document ID: | 94982 |
Date deposited on Open Access LMU: | 09 Mar 2023, 09:05 |
Last modified: | 09 Mar 2023, 09:05 |