Abstract
We introduce FLOTA (Few Longest Token Approximation), a simple yet effective method to improve the tokenization of pretrained language models (PLMs). FLOTA uses the vocabulary of a standard tokenizer but tries to preserve the morphological structure of words during tokenization. We evaluate FLOTA on morphological gold segmentations as well as a text classification task, using BERT, GPT-2, and XLNet as example PLMs. FLOTA leads to performance gains, makes inference more efficient, and enhances the robustness of PLMs with respect to whitespace noise.
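The core FLOTA idea described in the abstract can be sketched as follows. This is a simplified illustrative reimplementation, not the authors' released code: given a fixed subword vocabulary, greedily keep the longest in-vocabulary substring of a word and recurse on the remaining left and right parts, so that frequent morphemes tend to survive as whole tokens. The function name, the recursion-depth budget `k`, and the toy vocabulary are assumptions made for illustration.

```python
# Simplified sketch of the FLOTA idea (illustration only, not the
# authors' implementation). Greedily keep the longest substring of
# the word that is in the vocabulary, then recurse on the remainder.

def flota_segment(word, vocab, k=3):
    """Return a segmentation of `word` using up to `k` levels of
    greedy longest-match splitting against `vocab` (a set of subwords)."""
    if k == 0 or not word:
        return []
    # Search candidate substrings from longest to shortest.
    for length in range(len(word), 0, -1):
        for start in range(len(word) - length + 1):
            piece = word[start:start + length]
            if piece in vocab:
                left = flota_segment(word[:start], vocab, k - 1)
                right = flota_segment(word[start + length:], vocab, k - 1)
                return left + [piece] + right
    return []  # no part of the word is in the vocabulary
```

Under this sketch, a word like `tokenization` with a vocabulary containing `token` and `ization` is split at the morpheme boundary, whereas a left-to-right greedy tokenizer might produce less morphologically faithful pieces.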
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| EU Funded Grant Agreement Number: | 740516 |
| EU Projects: | Horizon 2020 > ERC Grants > ERC Advanced Grant > ERC Grant 740516: NonSequeToR - Non-sequence models for tokenization replacement |
| Research Centers: | Center for Information and Language Processing (CIS) |
| Subjects: | 000 Computer science, information and general works > 000 Computer science, knowledge, and systems; 400 Language > 400 Language; 400 Language > 410 Linguistics |
| URN: | urn:nbn:de:bvb:19-epub-92203-0 |
| Place of Publication: | Stroudsburg, PA |
| Language: | English |
| Item ID: | 92203 |
| Date Deposited: | 27 May 2022 10:11 |
| Last Modified: | 27 May 2022 10:11 |

