BPE Gets Picky: Efficient Vocabulary Refinement During Tokenizer Training

« Language models can benefit greatly from efficient tokenization. However, they still mostly use the classical BPE algorithm, a simple and reliable method. This has been shown to cause issues such as under-trained tokens and sub-optimal compression, which may affect downstream performance. We introduce Picky BPE, a modified BPE algorithm that carries out vocabulary refinement during tokenizer training. Our method improves vocabulary efficiency, eliminates under-trained tokens, and does not compromise text compression. Our experiments show that our method does not reduce downstream performance, and in several cases improves it. (…) »

source > arxiv.org, Pavel Chizhov, Catherine Arnett, Elizaveta Korotkova, Ivan P. Yamshchikov, arXiv:2409.04599v1, https://doi.org/10.48550/arXiv.2409.04599
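
The abstract does not spell out the refinement criterion, so here is a minimal, self-contained sketch of what "vocabulary refinement during tokenizer training" could look like: a toy BPE trainer that, after each merge, drops an intermediate token once nearly all of its occurrences have been absorbed by the new merged token. Everything below — `train_picky_bpe`, `IOS_THRESHOLD`, and the pruning ratio itself — is an illustrative assumption for intuition, not the paper's exact algorithm.

```python
from collections import Counter

# Hypothetical threshold: prune a child token if >=90% of its
# pre-merge occurrences were inside the newly merged pair.
IOS_THRESHOLD = 0.9

def train_picky_bpe(words, num_merges, threshold=IOS_THRESHOLD):
    """Toy BPE trainer with a vocabulary-refinement step.

    `words` maps pre-tokenized words to their corpus frequencies.
    """
    # Represent each word as a tuple of symbols (characters to start).
    corpus = {tuple(w): f for w, f in words.items()}
    vocab = {c for w in corpus for c in w}
    merges = []

    for _ in range(num_merges):
        # Count adjacent symbol pairs and token occurrences,
        # weighted by word frequency.
        pairs, token_freq = Counter(), Counter()
        for word, freq in corpus.items():
            for sym in word:
                token_freq[sym] += freq
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break

        (a, b), pair_freq = pairs.most_common(1)[0]
        new_token = a + b
        merges.append((a, b))
        vocab.add(new_token)

        # Apply the merge greedily, left to right, in every word.
        new_corpus = {}
        for word, freq in corpus.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and word[i] == a and word[i + 1] == b:
                    out.append(new_token)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            key = tuple(out)
            new_corpus[key] = new_corpus.get(key, 0) + freq
        corpus = new_corpus

        # Refinement step: if almost every occurrence of a child token
        # sat inside the merged pair, the child is now redundant in the
        # vocabulary, so remove it (base characters are always kept).
        for child in {a, b}:
            if len(child) > 1 and pair_freq / token_freq[child] >= threshold:
                vocab.discard(child)

    return vocab, merges

words = {"low": 5, "lowest": 2, "newer": 6, "wider": 3}
vocab, merges = train_picky_bpe(words, num_merges=10)
print(sorted(vocab, key=len, reverse=True)[:5])
```

A full implementation would also re-split the removed token's remaining occurrences in the corpus and could let it be re-learned later if it becomes frequent again; this sketch omits that bookkeeping for brevity.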
