---
license: apache-2.0
language:
- gl
- eu
metrics:
- BLEU
- TER
---
# HiTZ Center’s Galician-Basque machine translation model

## Model description
This model was trained from scratch using Marian NMT on a combination of Galician-Basque datasets totalling 13,125,745 sentence pairs: 413,057 pairs were parallel data collected from the web, while the remaining 12,712,688 were synthetic parallel data. The model was evaluated on the Flores, TaCon and NTREX evaluation datasets.
- Developed by: HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
- Model type: translation
- Source Language: Galician
- Target Language: Basque
- License: apache-2.0
## Intended uses and limitations
You can use this model for machine translation from Galician to Basque.
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import MarianTokenizer, AutoModelForSeq2SeqLM

src_text = ["Esta é unha proba."]

model_name = "HiTZ/mt-hitz-gl-eu"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tokenize the batch, generate the Basque translation, and decode it
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
```
The recommended environments include the following transformers versions: 4.12.3, 4.15.0 and 4.26.1.
## Training Details

### Training Data
The Galician-Basque data collected from the web was a combination of the following datasets:
| Dataset | Sentences before cleaning |
|---|---|
| XLENT | 236,460 |
| KDE4 | 96,509 |
| WikiMatrix | 43,101 |
| GNOME | 12,759 |
| OpenSubtitles | 12,391 |
| QED | 5,524 |
| Ubuntu | 3,051 |
| TED2020 v1 | 2,433 |
| NeuLab-TedTalks | 804 |
| ELRC-wikipedia_health | 25 |
| Total | 413,057 |
The 12,712,688 sentence pairs of synthetic parallel data were created in two ways: on the one hand, by translating into Galician the Spanish side of a compendium of ES-EU parallel corpora totalling 9,692,996 sentence pairs (using the ES-GL translator of the Nos project), and on the other hand, by adapting into Galician 3,019,692 Portuguese sentences from a PT-EU corpus (using the rule-based translator Apertium with the support of the Port2Gal transliterator). A sketch of the first branch is shown below.
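For illustration, here is a minimal Python sketch of translating the Spanish side of an ES-EU corpus into Galician and pairing the result with the Basque side. The model identifier `es-gl-translator` and the example sentences are hypothetical placeholders, not the actual Nos project tooling:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# "es-gl-translator" is a hypothetical placeholder, not the real Nos model id
es_gl_name = "es-gl-translator"
tokenizer = AutoTokenizer.from_pretrained(es_gl_name)
model = AutoModelForSeq2SeqLM.from_pretrained(es_gl_name)

def translate_batch(sentences):
    """Translate a batch of Spanish sentences into Galician."""
    inputs = tokenizer(sentences, return_tensors="pt", padding=True)
    outputs = model.generate(**inputs)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

# Pair each machine-translated Galician sentence with the untouched Basque
# side of the ES-EU corpus to obtain a synthetic GL-EU sentence pair.
es_side = ["Esto es una prueba."]  # illustrative ES-EU corpus content
eu_side = ["Hau proba bat da."]
synthetic_gl_eu = list(zip(translate_batch(es_side), eu_side))
```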
### Training Procedure

#### Preprocessing
After concatenation, all datasets are cleaned and deduplicated using bifixer (Ramírez-Sánchez et al., 2020), which identifies repetitions and fixes encoding problems, and LaBSE embeddings are used to filter out misaligned sentence pairs. Any sentence pair with a LaBSE similarity score below 0.5 is removed, as sketched below. The filtered corpus comprises 12,699,402 parallel sentences.
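A minimal sketch of this filtering step, assuming the sentence-transformers release of LaBSE (the exact tooling used during training may differ, and `filter_pairs` is an illustrative helper):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

labse = SentenceTransformer("sentence-transformers/LaBSE")

def filter_pairs(src_sentences, tgt_sentences, threshold=0.5):
    """Keep only pairs whose LaBSE cosine similarity reaches the threshold."""
    src_emb = labse.encode(src_sentences, convert_to_tensor=True)
    tgt_emb = labse.encode(tgt_sentences, convert_to_tensor=True)
    return [
        (src, tgt)
        for i, (src, tgt) in enumerate(zip(src_sentences, tgt_sentences))
        if cos_sim(src_emb[i], tgt_emb[i]).item() >= threshold
    ]

kept = filter_pairs(["Esta é unha proba."], ["Hau proba bat da."])
```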
#### Tokenization
All data is tokenized using sentencepiece, with a 32,000-token sentencepiece model learned from the combination of all filtered training data. This sentencepiece model is included with the released model; a sketch of the vocabulary learning step follows.
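The vocabulary learning step could look roughly like the following sketch; the input file name and the reliance on default training options are assumptions, not the exact configuration used for this model:

```python
import sentencepiece as spm

# Learn a joint 32,000-token model on the filtered GL+EU training text
# ("filtered_gl_eu.txt" is a hypothetical file name).
spm.SentencePieceTrainer.train(
    input="filtered_gl_eu.txt",
    model_prefix="mt-hitz-gl-eu",
    vocab_size=32000,
)

sp = spm.SentencePieceProcessor(model_file="mt-hitz-gl-eu.model")
print(sp.encode("Esta é unha proba.", out_type=str))  # subword pieces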
## Evaluation

### Variable and metrics
We use BLEU and TER scores for evaluation on three test sets: Flores-200, TaCon and NTREX. An example of computing both metrics is shown below.
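Both metrics can be computed with sacrebleu, for example; the hypothesis and reference lists below are purely illustrative:

```python
from sacrebleu.metrics import BLEU, TER

hyps = ["Hau proba bat da."]    # system translations (illustrative)
refs = [["Hau proba bat da."]]  # one inner list per reference set

print(BLEU().corpus_score(hyps, refs))  # corpus-level BLEU
print(TER().corpus_score(hyps, refs))   # corpus-level TER (lower is better)
```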
### Evaluation results
Below are the evaluation results for machine translation from Galician to Basque, compared to Google Translate, NLLB-200 3.3B and NLLB-200's distilled 1.3B variant:
#### BLEU scores

| Test set | Google Translate | NLLB 1.3B | NLLB 3.3B | mt-hitz-gl-eu |
|---|---|---|---|---|
| Flores 200 devtest | 17.0 | 12.7 | 12.5 | 16.7 |
| TaCon | 13.7 | 10.8 | 10.7 | 14.1 |
| NTREX | 14.0 | 10.6 | 9.5 | 13.3 |
| Average | 14.9 | 11.4 | 10.9 | 14.7 |
#### TER scores

| Test set | Google Translate | NLLB 1.3B | NLLB 3.3B | mt-hitz-gl-eu |
|---|---|---|---|---|
| Flores 200 devtest | 64.4 | 77.8 | 72.0 | 66.5 |
| TaCon | 64.2 | 81.7 | 73.2 | 65.4 |
| NTREX | 68.9 | 79.4 | 77.2 | 70.0 |
| Average | 65.8 | 79.6 | 74.1 | 67.3 |
## Additional information

### Author
HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
### Contact information

For further information, send an email to hitz@ehu.eus.
### Licensing information
This work is licensed under the Apache License, Version 2.0.
### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU, within the framework of the ILENIA project with references 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 and 2022/TL22/00215334.