
Hofmann, Valentin; Schütze, Hinrich; and Pierrehumbert, Janet B. (May 2022): An Embarrassingly Simple Method to Mitigate Undesirable Properties of Pretrained Language Model Tokenizers. In: Muresan, Smaranda; Nakov, Preslav; and Villavicencio, Aline (eds.): Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Dublin, Ireland, May 22-27, 2022. Stroudsburg, PA: Association for Computational Linguistics, pp. 385-393.


Abstract

We introduce FLOTA (Few Longest Token Approximation), a simple yet effective method to improve the tokenization of pretrained language models (PLMs). FLOTA uses the vocabulary of a standard tokenizer but tries to preserve the morphological structure of words during tokenization. We evaluate FLOTA on morphological gold segmentations as well as a text classification task, using BERT, GPT-2, and XLNet as example PLMs. FLOTA leads to performance gains, makes inference more efficient, and enhances the robustness of PLMs with respect to whitespace noise.
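The abstract only states that FLOTA reuses a standard tokenizer's vocabulary while trying to preserve morphological structure. The following is a minimal, hypothetical sketch of the idea suggested by the method's name (Few Longest Token Approximation): greedily pick up to k of the longest vocabulary substrings of a word. The function name, the use of BERT's vocabulary, and the handling of uncovered characters are illustrative assumptions, not the paper's exact algorithm (which, for instance, also deals with "##"-prefixed continuation pieces).

from transformers import AutoTokenizer

def flota_like_tokenize(word, vocab, k=3):
    """Greedy sketch: up to k passes, each taking the longest substring of
    the not-yet-covered part of `word` that appears in `vocab`."""
    found = {}          # start position in the original word -> matched piece
    chars = list(word)  # characters not yet covered by a match
    for _ in range(k):
        best = None
        # scan substrings of the remaining text, longest first
        for length in range(len(chars), 0, -1):
            for start in range(len(chars) - length + 1):
                piece = "".join(chars[start:start + length])
                if "\0" in piece:   # span crosses an earlier match, skip it
                    continue
                if piece in vocab:
                    best = (start, piece)
                    break
            if best:
                break
        if best is None:
            break
        start, piece = best
        found[start] = piece
        # blank out the matched span so later passes only see the rest
        for i in range(start, start + len(piece)):
            chars[i] = "\0"
    # return the pieces in their original left-to-right order
    return [found[i] for i in sorted(found)]

# Illustrative usage with BERT's vocabulary (an assumed setup):
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
print(flota_like_tokenize("undesirable", set(tok.get_vocab()), k=3))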
