Abstract
This paper presents our latest investigation of recurrent neural networks for the slot filling task of spoken language understanding. We implement a bi-directional Elman-type recurrent neural network which takes information not only from the past but also from the future context to predict the semantic label of the target word. Furthermore, we propose to use a ranking loss function to train the model, which improves performance over the cross-entropy loss function. On the ATIS benchmark data set, we achieve a new state-of-the-art result of 95.56% F1-score without using any additional knowledge or data sources.
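The abstract names two ingredients: a bi-directional Elman network whose forward and backward hidden states are combined to score each word's slot label, and a ranking loss used in place of cross entropy. The NumPy sketch below is a minimal illustration of both ideas under toy dimensions; all sizes, initializations, and the margin/scaling values (`gamma`, `m_pos`, `m_neg`) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
emb_dim, hid_dim, n_labels, seq_len = 8, 16, 5, 6

# Toy embedded word sequence (seq_len x emb_dim) and toy gold labels.
X = rng.normal(size=(seq_len, emb_dim))
y = rng.integers(0, n_labels, size=seq_len)

# Elman (simple) RNN parameters for the forward and backward directions.
W_f = rng.normal(scale=0.1, size=(hid_dim, emb_dim))       # input -> hidden (forward)
U_f = rng.normal(scale=0.1, size=(hid_dim, hid_dim))       # hidden -> hidden (forward)
W_b = rng.normal(scale=0.1, size=(hid_dim, emb_dim))       # input -> hidden (backward)
U_b = rng.normal(scale=0.1, size=(hid_dim, hid_dim))       # hidden -> hidden (backward)
V   = rng.normal(scale=0.1, size=(n_labels, 2 * hid_dim))  # combined hidden -> label scores

def elman_pass(X, W, U):
    """Elman recurrence h_t = tanh(W x_t + U h_{t-1}) over a sequence."""
    h = np.zeros(W.shape[0])
    states = []
    for x in X:
        h = np.tanh(W @ x + U @ h)
        states.append(h)
    return np.stack(states)

H_f = elman_pass(X, W_f, U_f)              # hidden states from past context
H_b = elman_pass(X[::-1], W_b, U_b)[::-1]  # hidden states from future context, re-aligned
H = np.concatenate([H_f, H_b], axis=1)     # both directions per word
scores = H @ V.T                           # per-word label scores (seq_len x n_labels)

def ranking_loss(s, y_true, gamma=2.0, m_pos=2.5, m_neg=0.5):
    """Pairwise ranking loss for one word: push the gold label's score above
    a margin and the best competing label's score below another margin.
    The gamma/margin values here are common choices, assumed for the demo."""
    s_pos = s[y_true]
    s_neg = np.max(np.delete(s, y_true))  # most competitive wrong label
    return (np.log1p(np.exp(gamma * (m_pos - s_pos)))
            + np.log1p(np.exp(gamma * (m_neg + s_neg))))

loss = sum(ranking_loss(scores[t], y[t]) for t in range(seq_len))
print(f"toy sequence ranking loss: {loss:.4f}")
```

Unlike cross entropy, which normalizes over all labels, this loss only contrasts the gold label with its strongest competitor, which is one intuition for why ranking training can sharpen the decision boundary on confusable slot labels.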
| Document type: | Journal article |
|---|---|
| Faculty: | Sprach- und Literaturwissenschaften |
| Subject areas: | 400 Language > 400 Language |
| ISSN: | 1520-6149 |
| Language: | English |
| Document ID: | 47207 |
| Date published on Open Access LMU: | 27 Apr 2018, 08:12 |
| Last modified: | 04 Nov 2020, 13:24 |