Better Low-Resource Machine Translation with Smaller Vocabularies

Warning

This publication does not fall under the Institute of Computer Science but under the Faculty of Informatics. The official publication page is on the muni.cz website.
Authors

SIGNORONI Edoardo RYCHLÝ Pavel

Year of publication 2024
Type Article in conference proceedings
Conference Text, Speech, and Dialogue
Faculty / MU department

Faculty of Informatics

Citation
www https://link.springer.com/chapter/10.1007/978-3-031-70563-2_15
DOI http://dx.doi.org/10.1007/978-3-031-70563-2_15
Keywords Low-resource; Neural Machine Translation; Tokenization
Attached files
Description Data scarcity remains a major challenge in machine translation. The performance of state-of-the-art deep learning architectures, such as the Transformer, on under-resourced languages is well below that on high-resource languages. This precludes access to information for millions of speakers across the globe. Previous research has shown that the Transformer is highly sensitive to hyperparameters in low-resource conditions. One such hyperparameter is the size of the model's subword vocabulary. In this paper, we show that smaller vocabularies, as small as 1k tokens, are preferable to the default value of 32k in a diverse array of low-resource conditions. We experiment with different sizes on English-Akkadian, Lower Sorbian-German, and English-Manipuri to obtain models that are faster to train, smaller, and better performing than the default setting. These models achieve improvements of up to 322% in ChrF score while being up to 66% smaller and up to 17% faster to train.
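The comparison described in the abstract boils down to training subword vocabularies of different sizes on the same data and segmenting the corpus with each. Below is a minimal sketch, assuming a SentencePiece BPE tokenizer; the file names, corpus path, and settings are illustrative assumptions, not the authors' actual pipeline.

import sentencepiece as spm

# Train two subword models on the same (hypothetical) training corpus:
# the default-sized 32k vocabulary and the small 1k vocabulary favoured
# in low-resource conditions.
for vocab_size in (32000, 1000):
    spm.SentencePieceTrainer.train(
        input="train.src-tgt.txt",          # hypothetical training corpus
        model_prefix=f"bpe_{vocab_size}",   # writes bpe_<size>.model / .vocab
        vocab_size=vocab_size,
        model_type="bpe",
        character_coverage=1.0,
    )

# Segment the same sentence with both models: the 1k model splits words into
# more, shorter subword units, which shrinks the embedding and softmax layers
# of the translation model.
for vocab_size in (32000, 1000):
    sp = spm.SentencePieceProcessor(model_file=f"bpe_{vocab_size}.model")
    print(vocab_size, sp.encode("Data scarcity is a major challenge.", out_type=str))

The smaller vocabulary trades longer token sequences for far fewer embedding and output-projection parameters, which is where the reported reductions in model size and training time would come from.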
Related projects:
