Abstract
Character-based models are becoming more and more popular for various natural language processing tasks, especially due to the success of neural networks. They offer the possibility of directly modeling text sequences without the need for tokenization and can therefore enhance the traditional preprocessing pipeline. This paper provides an overview of character-based models for a variety of natural language processing tasks. We group existing work into three categories: tokenization-based approaches, bag-of-n-gram models, and end-to-end models. For each category, we present prominent examples of studies, with a particular focus on recent character-based deep learning work.
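To make the bag-of-n-gram category concrete, the following minimal sketch counts character n-grams directly from a raw string, without any tokenization step. It is illustrative only; the function name `char_ngrams` and the chosen n-gram range are our own assumptions, not details taken from the surveyed papers.

```python
from collections import Counter

def char_ngrams(text, n_min=2, n_max=4):
    """Collect all character n-grams of length n_min..n_max from a raw string."""
    grams = Counter()
    for n in range(n_min, n_max + 1):
        for i in range(len(text) - n + 1):
            grams[text[i:i + n]] += 1
    return grams

# No tokenizer is needed: whitespace and punctuation are treated
# like any other character in the sequence.
print(char_ngrams("character-based models", n_min=3, n_max=3).most_common(5))
```

Such a bag-of-character-n-grams can then serve as a feature representation (e.g., for a linear classifier), which is exactly what distinguishes this category from tokenization-based and end-to-end approaches.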
Document type: | Journal article
---|---
Faculty: | Sprach- und Literaturwissenschaften > Department 2
Subject areas: | 400 Language > 400 Language
ISSN: | 0302-9743
Language: | English
Document ID: | 66193
Date published on Open Access LMU: | 19 Jul. 2019, 12:19
Last modified: | 04 Nov. 2020, 13:47