---
license: apache-2.0
---
# Projecte Aina’s Galician-Catalan machine translation model

## Table of Contents
- Model Description
- Intended Uses and Limitations
- How to Use
- Training
- Evaluation
- Additional Information
## Model description
This model was trained from scratch using the Fairseq toolkit on a combination of Galician-Catalan datasets totalling 10,017,995 sentence pairs: 4,267,995 pairs of parallel data collected from the web and 5,750,000 pairs of synthetic parallel data created with the GL-ES translator of Proxecto Nós. The model was evaluated on the Flores, TaCon and NTREX evaluation datasets.
## Intended uses and limitations
You can use this model for machine translation from Galician to Catalan.
## How to use

### Usage

Required libraries:
```bash
pip install ctranslate2 pyonmttok
```
Translate a sentence using Python:
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download

# Download the model and its SentencePiece tokenizer from the Hugging Face Hub
model_dir = snapshot_download(repo_id="projecte-aina/mt-aina-gl-ca", revision="main")

tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")
# tokenize() returns a (tokens, features) tuple; the model consumes the token list
tokenized = tokenizer.tokenize("Benvido ao proxecto Ilenia.")

translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]["tokens"]))
```
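The same API accepts a batch of tokenized sentences, which is considerably faster than translating one sentence at a time. A minimal sketch, assuming `tokenizer` and `translator` are loaded as above (the `beam_size` value is illustrative):

```python
sentences = [
    "Benvido ao proxecto Ilenia.",
    "Este modelo traduce do galego ao catalán.",
]

# Tokenize every sentence; tokenize() returns (tokens, features) tuples
batch = [tokenizer.tokenize(s)[0] for s in sentences]

# Translate the whole batch in a single call
results = translator.translate_batch(batch, beam_size=5)
for result in results:
    print(tokenizer.detokenize(result[0]["tokens"]))
```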
## Training

### Training data
The Galician-Catalan data collected from the web was a combination of the following datasets:
| Dataset | Sentences before cleaning |
|---|---|
| CCMatrix v1 | 3,041,152 |
| XLENT | 371,377 |
| WikiMatrix | 286,446 |
| GNOME | 18 |
| KDE4 | 147,182 |
| TED2020 v1 | 11,041 |
| OpenSubtitles | 16,379 |
| Covost 2 | 263,729 |
| Gene-Crawling | 38,320 |
| Memories Projectes Lliures | 794,631 |
| **Total** | 4,970,275 |
The datasets were concatenated before filtering to avoid intra-dataset duplicates; after filtering, 4,267,995 sentence pairs remained. The 5,750,000 sentence pairs of synthetic parallel data were created from a random sampling of the Projecte Aina ES-CA corpus.
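The card does not spell out the pivoting step, but the natural reading is that the Spanish side of each sampled ES-CA pair was machine-translated into Galician and paired with the original Catalan side. A minimal sketch, where `translate_es_to_gl` is a hypothetical wrapper around the Proxecto Nós translator:

```python
import random

def make_synthetic_pairs(es_ca_corpus, n_samples, translate_es_to_gl):
    """Build synthetic GL-CA pairs by pivoting an ES-CA corpus through ES->GL MT.

    es_ca_corpus: list of (es, ca) sentence pairs.
    translate_es_to_gl: hypothetical callable wrapping the Proxecto Nós system.
    """
    sampled = random.sample(es_ca_corpus, n_samples)
    return [(translate_es_to_gl(es), ca) for es, ca in sampled]
```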
### Training procedure

#### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75, computed on sentence embeddings obtained with LaBSE. The filtered datasets are then concatenated to form a final corpus of 10,017,995 sentence pairs. Before training, punctuation is normalized using a modified version of the join-single-file.py script from SoftCatalà.
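A minimal sketch of this similarity filter, assuming the sentence-transformers release of LaBSE (the card does not specify which implementation was used):

```python
from sentence_transformers import SentenceTransformer, util

# LaBSE embeds sentences from different languages into a shared space,
# so cosine similarity approximates cross-lingual semantic equivalence.
model = SentenceTransformer("sentence-transformers/LaBSE")

def filter_pairs(pairs, threshold=0.75):
    """Keep only (gl, ca) pairs whose embeddings have cosine similarity >= threshold."""
    gl_emb = model.encode([gl for gl, _ in pairs], convert_to_tensor=True)
    ca_emb = model.encode([ca for _, ca in pairs], convert_to_tensor=True)
    sims = util.cos_sim(gl_emb, ca_emb).diagonal()
    return [pair for pair, sim in zip(pairs, sims) if sim >= threshold]
```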
#### Tokenization
All data is tokenized with SentencePiece, using a 50,000-token SentencePiece model learned from the combination of all filtered training data. This model is included.
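For reference, a vocabulary of this size can be learned with the SentencePiece Python API roughly as follows (a sketch; the exact options used for the released spm.model are not documented, and the input path is a placeholder):

```python
import sentencepiece as spm

# Learn a joint 50k-piece model over the concatenated GL+CA training text
spm.SentencePieceTrainer.train(
    input="filtered_training_data.gl-ca.txt",  # placeholder path
    model_prefix="spm",                        # produces spm.model and spm.vocab
    vocab_size=50000,
)
```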
#### Hyperparameters
The model is based on the Transformer-XLarge proposed by Subramanian et al. The following hyperparameters were set in the Fairseq toolkit:
| Hyperparameter | Value |
|---|---|
| Architecture | transformer_vaswani_wmt_en_de_big |
| Embedding size | 1024 |
| Feedforward size | 4096 |
| Number of heads | 16 |
| Encoder layers | 24 |
| Decoder layers | 6 |
| Normalize before attention | True |
| --share-decoder-input-output-embed | True |
| --share-all-embeddings | True |
| Effective batch size | 48,000 |
| Optimizer | Adam |
| Adam betas | (0.9, 0.980) |
| Clip norm | 0.0 |
| Learning rate | 5e-4 |
| LR scheduler | inverse sqrt |
| Warmup updates | 8000 |
| Dropout | 0.1 |
| Label smoothing | 0.1 |
The model was trained for 24,000 updates on the parallel data collected from the web. This data was then concatenated with the synthetic parallel data and training continued for a total of 34,000 updates. Weights were saved every 1,000 updates and reported results are the average of the last 4 checkpoints.
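Fairseq provides a scripts/average_checkpoints.py utility for this final averaging step; conceptually it is just a parameter-wise mean, as in this simplified sketch (it assumes Fairseq-style checkpoints, which store the weights under a "model" key):

```python
import torch

def average_checkpoints(paths):
    """Parameter-wise average of several checkpoints (here, the last 4)."""
    avg = None
    for path in paths:
        # Fairseq checkpoints keep the model weights under the "model" key
        state = torch.load(path, map_location="cpu")["model"]
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}
```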
## Evaluation

### Variables and metrics

We use the BLEU score for evaluation on the Flores-200, TaCon and NTREX test sets.
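The card does not name the scoring tool; a standard implementation such as sacreBLEU computes corpus-level BLEU as in this minimal sketch:

```python
import sacrebleu

# One system output and one reference per test sentence (illustrative strings)
hypotheses = ["Benvingut al projecte Ilenia."]
references = ["Benvingut al projecte Ilenia."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}")
```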
### Evaluation results

Below are the evaluation results for machine translation from Galician to Catalan, compared to Google Translate, M2M100 1.2B, NLLB-200 3.3B and NLLB-200's distilled 1.3B variant:
| Test set | Google Translate | M2M100 1.2B | NLLB 1.3B | NLLB 3.3B | mt-aina-gl-ca |
|---|---|---|---|---|---|
| Flores 101 devtest | 36.4 | 32.6 | 22.3 | 34.3 | 32.4 |
| TaCon | 48.4 | 56.5 | 32.2 | 54.1 | 58.2 |
| NTREX | 34.7 | 34.0 | 20.4 | 34.2 | 33.7 |
| Average | 39.8 | 41.0 | 25.0 | 40.9 | 41.4 |
## Additional information

### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information

For further information, send an email to langtech@bsc.es.
### Copyright

Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information

This work is licensed under the Apache License, Version 2.0.
### Funding

This work was funded by SEDIA within the framework of the ILENIA project.