---
license: apache-2.0
language:
- eu
- es
metrics:
- BLEU
- TER
---
# HiTZ Center’s Basque-Spanish machine translation model
## Model description
This model was trained from scratch using Marian NMT on a combination of Spanish-Basque datasets totalling 104,417,271 sentence pairs: 12,091,549 pairs of parallel data collected from the web, and the remaining 92,325,722 pairs of synthetic parallel data created by backtranslating the OSCAR Spanish monolingual dataset. The model was evaluated on the Flores, TaCon and NTREX evaluation datasets.
- Developed by: HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
- Model type: translation
- Source Language: Basque
- Target Language: Spanish
- License: apache-2.0
## Intended uses and limitations
You can use this model for machine translation from Basque to Spanish.
At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import MarianTokenizer, AutoModelForSeq2SeqLM

src_text = ["Hau proba bat da."]

model_name = "HiTZ/mt-hitz-eu-es"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tokenize the Basque input and generate the Spanish translation
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
```
The recommended environments include the following `transformers` versions: 4.12.3, 4.15.0 and 4.26.1.
## Training Details
### Training Data
The Spanish-Basque data collected from the web was a combination of the following datasets:
| Dataset | Sentences before cleaning |
|---|---|
| CCMatrix | 6,564,108 |
| MultiParaCrawl | 3,344,373 |
| ParaCrawl | 2,410,895 |
| TranslationMemories_EJ | 1,127,141 |
| OpenData2017 (IWSLT18) | 926,941 |
| OpenSubtitles | 793,593 |
| TranslationMemories_GD | 788,776 |
| OPUS-Elhuyar | 642,347 |
| EiTB-ParCC | 637,182 |
| EhuHac | 609,912 |
| WikiMatrix | 154,281 |
| **Total** | **12,091,549** |
The 92,325,722 sentence pairs of synthetic parallel data were created by backtranslating the OSCAR Spanish monolingual dataset using a previous version (trained without synthetic parallel data) of the ES-EU translator from the HiTZ Center.
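For illustration, below is a minimal backtranslation sketch along these lines, assuming the earlier ES-EU model is available as a Marian checkpoint; the checkpoint name `HiTZ/mt-hitz-es-eu` is an assumption and is not given in this card:

```python
from transformers import MarianTokenizer, AutoModelForSeq2SeqLM

# NOTE: hypothetical checkpoint name for the earlier ES-EU model; the card does not give it.
bt_model_name = "HiTZ/mt-hitz-es-eu"
tokenizer = MarianTokenizer.from_pretrained(bt_model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(bt_model_name)

# Monolingual Spanish sentences (e.g. a batch from OSCAR)
es_sentences = ["Esto es una prueba.", "Hoy hace buen tiempo."]
batch = tokenizer(es_sentences, return_tensors="pt", padding=True)
eu_synthetic = tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True)

# Each (synthetic Basque, original Spanish) pair becomes EU->ES training data.
synthetic_pairs = list(zip(eu_synthetic, es_sentences))
```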
### Training Procedure
#### Preprocessing
After concatenation, all datasets are cleaned and deduplicated using the Bifixer and Bicleaner tools (Ramírez-Sánchez et al., 2020). Any sentence pair with a classification score below 0.5 is removed. The filtered corpus is composed of 100,843,973 parallel sentences.
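The threshold filter can be expressed roughly as below; this is a minimal sketch assuming one sentence pair per line in a tab-separated file whose last column holds the Bicleaner score (file names are hypothetical):

```python
# Minimal score-filtering sketch: keep pairs with Bicleaner score >= 0.5.
# Assumes a TSV with the classification score in the last column.
THRESHOLD = 0.5

with open("corpus.scored.tsv", encoding="utf-8") as fin, \
     open("corpus.filtered.tsv", "w", encoding="utf-8") as fout:
    for line in fin:
        score = float(line.rstrip("\n").split("\t")[-1])
        if score >= THRESHOLD:
            fout.write(line)
```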
#### Tokenization
All data is tokenized using SentencePiece, with a 32,000-token SentencePiece model learned from the combination of all filtered training data. This SentencePiece model is included with the released model.
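A minimal sketch of learning and applying such a SentencePiece model is shown below; the file names are hypothetical, and the card does not specify any training options beyond the vocabulary size:

```python
import sentencepiece as spm

# Learn a 32,000-token SentencePiece model on the filtered training data
# (hypothetical file names; only the vocabulary size comes from the card).
spm.SentencePieceTrainer.train(
    input="filtered_train.eu-es.txt",
    model_prefix="spm_eu_es",
    vocab_size=32000,
)

# Apply the learned model to a sample sentence
sp = spm.SentencePieceProcessor(model_file="spm_eu_es.model")
print(sp.encode("Hau proba bat da.", out_type=str))
```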
## Evaluation
### Variables and metrics
We use the BLEU and TER scores for evaluation on the following test sets: Flores-200, TaCon and NTREX.
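Both metrics can be computed with a standard scorer such as sacreBLEU; the card does not name the exact tool, so the snippet below is only a sketch with placeholder segments:

```python
import sacrebleu

# Placeholder system outputs and references; replace with real test-set data.
hypotheses = ["Esto es una prueba."]
references = [["Esto es una prueba."]]  # one reference stream, parallel to hypotheses

bleu = sacrebleu.corpus_bleu(hypotheses, references)
ter = sacrebleu.corpus_ter(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}  TER = {ter.score:.1f}")
```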
### Evaluation results
Below are the evaluation results for machine translation from Basque to Spanish, compared to Google Translate and NLLB-200 3.3B:
#### BLEU scores (higher is better)
| Test set | Google Translate | NLLB 3.3B | mt-hitz-eu-es |
|---|---|---|---|
| Flores 200 devtest | 22.1 | 21.3 | 20.4 |
| TaCon | 34.7 | 31.7 | 37.7 |
| NTREX | 28.8 | 27.8 | 26.9 |
| Average | 28.5 | 26.9 | 28.3 |
#### TER scores (lower is better)
| Test set | Google Translate | NLLB 3.3B | mt-hitz-eu-es |
|---|---|---|---|
| Flores 200 devtest | 59.2 | 61.6 | 61.2 |
| TaCon | 46.6 | 51.7 | 44.6 |
| NTREX | 55.5 | 57.6 | 57.2 |
| Average | 53.8 | 57.0 | 54.3 |
## Additional information
### Author
HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
### Contact information
For further information, send an email to hitz@ehu.eus.
### Licensing information
This work is licensed under the Apache License, Version 2.0.
### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública and by the EU (NextGenerationEU) within the framework of the ILENIA project, with references 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 and 2022/TL22/00215334.