---
license: apache-2.0
language:
  - eu
  - ca
metrics:
  - bleu
library_name: fairseq
---

Projecte Aina’s Basque-Catalan machine translation model

Model description

This model was trained from scratch using the Fairseq toolkit on a combination of Basque-Catalan datasets totalling approximately 75 million sentence pairs. Parallel Basque-Catalan data was collected from Opus, and additional synthetic data was created from the Projecte Aina ES-CA corpus by translating the Spanish side with the ES-EU translator of HiTZ. The model was evaluated on the Flores and NTREX evaluation datasets.

Intended uses and limitations

You can use this model for machine translation from Basque to Catalan.

How to use

Required libraries:

pip install ctranslate2 pyonmttok huggingface_hub

Translate a sentence using Python:

import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download

# Download the CTranslate2 weights and the SentencePiece model
model_dir = snapshot_download(repo_id="projecte-aina/aina-translator-eu-ca", revision="main")

# Tokenize the source sentence with the bundled SentencePiece model
tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")
tokenized = tokenizer.tokenize("Ongi etorri Aina proiektura.")  # returns (tokens, features)

# Translate and print the detokenized best hypothesis
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]["tokens"]))
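
Since translate_batch accepts a list of token sequences, several sentences can be translated in one call. A minimal sketch reusing the tokenizer and translator objects from the snippet above (the example sentences are illustrative):

# Translate several sentences in one batch
sentences = ["Egun on!", "Eskerrik asko."]
batch = [tokenizer.tokenize(s)[0] for s in sentences]  # keep only the token lists
results = translator.translate_batch(batch)
for result in results:
    print(tokenizer.detokenize(result[0]["tokens"]))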

Limitations and bias

At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

Training

Training data

The Basque-Catalan data is a combination of publicly available bilingual datasets collected from Opus. Additional synthetic parallel data were created from the Projecte Aina ES-CA corpus.

Training procedure

Data preparation

All datasets are filtered for language alignment, deduplicated, and cleaned of sentence pairs whose cosine similarity falls below 0.75, computed on sentence embeddings from LaBSE (a sketch of this filter is shown below). The filtered datasets are then concatenated to form the final training corpus; before training, punctuation is normalized using a modified version of the join-single-file.py script from SoftCatalà.
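
As an illustration only, the similarity filter could be reimplemented with the sentence-transformers port of LaBSE. The card specifies LaBSE embeddings and the 0.75 cutoff; the library choice, model name, and function below are assumptions:

import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical sketch of the LaBSE-based similarity filter
labse = SentenceTransformer("sentence-transformers/LaBSE")

def filter_pairs(src_sents, tgt_sents, threshold=0.75):
    # With normalized embeddings, the dot product equals the cosine similarity
    src_emb = labse.encode(src_sents, normalize_embeddings=True)
    tgt_emb = labse.encode(tgt_sents, normalize_embeddings=True)
    sims = np.sum(src_emb * tgt_emb, axis=1)
    return [(s, t) for s, t, sim in zip(src_sents, tgt_sents, sims) if sim >= threshold]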

Tokenization

All data is tokenized with SentencePiece, using a 50,000-piece SentencePiece model learned from the combination of all filtered training data. This model is included with the released files (the spm.model used in the usage example above); a training sketch follows.
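
For reference, learning such a vocabulary with the sentencepiece Python package could look like the following; the input file name and all options other than the vocabulary size are illustrative:

import sentencepiece as spm

# Hypothetical invocation: learn a 50,000-piece model from the training corpus
spm.SentencePieceTrainer.train(
    input="train.eu-ca.txt",  # concatenated filtered training text (illustrative name)
    model_prefix="spm",       # writes spm.model and spm.vocab
    vocab_size=50000,
)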

Hyperparameters

The model is based on the Transformer-XLarge proposed by Subramanian et al. The following hyperparameters were set in the Fairseq toolkit (a sketch of the corresponding fairseq-train invocation follows the table):

| Hyperparameter | Value |
|---|---|
| Architecture | transformer_vaswani_wmt_en_de_big |
| Embedding size | 1024 |
| Feedforward size | 4096 |
| Number of heads | 16 |
| Encoder layers | 24 |
| Decoder layers | 6 |
| Normalize before attention | True |
| --share-decoder-input-output-embed | True |
| --share-all-embeddings | True |
| Effective batch size | 48,000 |
| Optimizer | Adam |
| Adam betas | (0.9, 0.98) |
| Clip norm | 0.0 |
| Learning rate | 5e-4 |
| LR scheduler | inverse sqrt |
| Warmup updates | 8000 |
| Dropout | 0.1 |
| Label smoothing | 0.1 |
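
These values map onto standard fairseq-train options. A hedged sketch of the invocation follows; the data directory, the --max-tokens/--update-freq split behind the 48,000 effective batch size, and any unstated options are assumptions:

fairseq-train data-bin/eu-ca \
    --arch transformer_vaswani_wmt_en_de_big \
    --encoder-layers 24 --decoder-layers 6 \
    --encoder-normalize-before --decoder-normalize-before \
    --share-decoder-input-output-embed --share-all-embeddings \
    --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
    --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 8000 \
    --dropout 0.1 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --max-tokens 12000 --update-freq 4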

The model was trained for 19,000 updates on the parallel data collected from the web. This data was then concatenated with the synthetic parallel data, and training continued for a total of 30,000 updates. Weights were saved every 1,000 updates, and the reported results are the average of the last 4 checkpoints (see the averaging sketch below).
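
Fairseq provides scripts/average_checkpoints.py for this step; an equivalent minimal sketch in plain PyTorch, with illustrative checkpoint file names, is:

import torch

# Hypothetical: average the parameter tensors of the last 4 saved checkpoints
# (float parameters assumed)
paths = [f"checkpoints/checkpoint_{u}.pt" for u in (27000, 28000, 29000, 30000)]
states = [torch.load(p, map_location="cpu")["model"] for p in paths]
averaged = {k: sum(s[k] for s in states) / len(states) for k in states[0]}

checkpoint = torch.load(paths[-1], map_location="cpu")
checkpoint["model"] = averaged
torch.save(checkpoint, "checkpoints/checkpoint_avg.pt")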

Evaluation

Variables and metrics

We use the BLEU score for evaluation on the Flores-200 and NTREX test sets.
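
Scores of this kind can be computed with sacrebleu; a minimal sketch follows (the example strings are illustrative, and the exact sacrebleu configuration behind the reported numbers is not stated in this card):

import sacrebleu

hypotheses = ["Benvingut al projecte Aina."]    # system outputs, one per segment
references = [["Benvingut al projecte Aina."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)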

Evaluation results

Below are the evaluation results for machine translation from Basque to Catalan, compared to Google Translate, NLLB-200 3.3B and the distilled NLLB-200 1.3B variant:

| Test set | Google Translate | NLLB-200 1.3B | NLLB-200 3.3B | aina-translator-eu-ca |
|---|---|---|---|---|
| Flores-200 devtest | 29.7 | 17.7 | 26.5 | 28.5 |
| NTREX | 27.2 | 15.8 | 25.3 | 26.2 |
| Average | 28.5 | 16.75 | 25.9 | 27.4 |

Additional information

Author

The Language Technologies Unit from Barcelona Supercomputing Center.

Contact

For further information, please send an email to langtech@bsc.es.

Copyright

Copyright (c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center.

License

Apache License, Version 2.0

Funding

This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337.

Disclaimer

The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.

Be aware that the model may have biases and/or any other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it) or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owner and creator of the model (Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties.