---
license: cc-by-4.0
---

Projecte Aina’s Catalan-Portuguese machine translation model

Model description

This model was trained from scratch using the Fairseq toolkit on a combination of Catalan-Portuguese datasets which, after filtering and cleaning, comprised 6,159,631 sentence pairs. The model was evaluated on the Flores and NTREX evaluation datasets.

Intended uses and limitations

You can use this model for machine translation from Catalan to Portuguese.

How to use

Usage

Required libraries:

pip install ctranslate2 pyonmttok

Translate a sentence using Python:

import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download

# Download the model files (SentencePiece model + CTranslate2 weights) from the Hub
model_dir = snapshot_download(repo_id="projecte-aina/mt-aina-ca-pt", revision="main")

# Tokenize the source sentence with the bundled SentencePiece model
tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")
tokenized = tokenizer.tokenize("Benvingut al projecte Aina!")  # returns (tokens, features)

# Translate and detokenize
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0].hypotheses[0]))
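The same objects can be reused to translate several sentences in one call. A minimal sketch (the sentence list and variable names are illustrative):

# Translate several sentences in one batch (example inputs are illustrative).
sentences = ["Bon dia!", "Com va el projecte?"]
batch = [tokenizer.tokenize(s)[0] for s in sentences]  # tokenize() returns (tokens, features)
results = translator.translate_batch(batch)
for result in results:
    print(tokenizer.detokenize(result.hypotheses[0]))  # best hypothesis per sentence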

Training

Training data

The model was trained on a combination of the following datasets:

| Dataset | Sentences | Sentences after Cleaning |
|---|---:|---:|
| CCMatrix v1 | 12,674,684 | 3,765,459 |
| WikiMatrix | 358,873 | 317,649 |
| GNOME | 5,211 | 1,752 |
| KDE4 | 166,208 | 117,828 |
| QED | 53,635 | 43,736 |
| TED2020 v1 | 48,942 | 41,461 |
| OpenSubtitles | 384,142 | 235,604 |
| GlobalVoices | 4,035 | 3,430 |
| Tatoeba | 754 | 723 |
| Europarl | 1,692,106 | 1,631,989 |
| Total | 15,391,745 | 6,159,631 |

All corpora except Europarl were collected from OPUS. The Europarl corpus is a synthetic parallel corpus created by SoftCatalà from the original Spanish-Catalan corpus.

Training procedure

Data preparation

All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75. This is done using sentence embeddings calculated with LaBSE. The filtered datasets are then concatenated to form a final corpus of 6,159,631 sentence pairs. Before training, punctuation is normalized using a modified version of the join-single-file.py script from SoftCatalà.
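The exact filtering scripts are not published in this card; the sketch below illustrates the step under the assumption that LaBSE is loaded through the sentence-transformers package, using the 0.75 threshold described above:

# Sketch of LaBSE-based similarity filtering (assumes the sentence-transformers
# package; the authors' actual pipeline may differ).
import numpy as np
from sentence_transformers import SentenceTransformer

labse = SentenceTransformer("sentence-transformers/LaBSE")

def filter_pairs(ca_sentences, pt_sentences, threshold=0.75):
    """Keep pairs whose LaBSE embeddings have cosine similarity >= threshold."""
    ca_emb = labse.encode(ca_sentences, normalize_embeddings=True)
    pt_emb = labse.encode(pt_sentences, normalize_embeddings=True)
    # With L2-normalized embeddings, cosine similarity reduces to a dot product.
    sims = np.sum(ca_emb * pt_emb, axis=1)
    pairs = zip(ca_sentences, pt_sentences)
    return [pair for pair, sim in zip(pairs, sims) if sim >= threshold]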

Tokenization

All data is tokenized using SentencePiece, with a 50,000-token SentencePiece model learned from the combination of all filtered training data. This model is included with the released files.
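As an illustration, a 50,000-token SentencePiece model can be learned with the sentencepiece package roughly as follows (the corpus file name is a placeholder, and the authors' exact training options may differ):

# Sketch of learning the 50k SentencePiece model (file names are placeholders).
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="filtered_corpus.ca-pt.txt",  # hypothetical concatenated training text
    model_prefix="spm",                 # writes spm.model and spm.vocab
    vocab_size=50000,
)

sp = spm.SentencePieceProcessor(model_file="spm.model")
print(sp.encode("Benvingut al projecte Aina!", out_type=str))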

Hyperparameters

The model is based on the Transformer-XLarge architecture proposed by Subramanian et al. The following hyperparameters were set in the Fairseq toolkit:

| Hyperparameter | Value |
|---|---|
| Architecture | transformer_vaswani_wmt_en_de_big |
| Embedding size | 1024 |
| Feedforward size | 4096 |
| Number of heads | 16 |
| Encoder layers | 24 |
| Decoder layers | 6 |
| Normalize before attention | True |
| --share-decoder-input-output-embed | True |
| --share-all-embeddings | True |
| Effective batch size | 48,000 |
| Optimizer | Adam |
| Adam betas | (0.9, 0.980) |
| Clip norm | 0.0 |
| Learning rate | 5e-4 |
| LR scheduler | inverse sqrt |
| Warmup updates | 8000 |
| Dropout | 0.1 |
| Label smoothing | 0.1 |

The model was trained for a total of 17,000 updates. Weights were saved every 1000 updates, and the reported results are the average of the last 4 checkpoints.
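For reference, a fairseq-train call matching the table might look roughly like the sketch below. The data path is a placeholder, and --max-tokens and --update-freq are assumptions chosen so that their product approximates the 48,000-token effective batch size; the actual GPU configuration is not documented here.

# Sketch of a matching fairseq-train invocation (data path, --max-tokens and
# --update-freq are assumptions; the remaining values come from the table above).
fairseq-train data-bin/ca-pt \
    --arch transformer_vaswani_wmt_en_de_big \
    --encoder-layers 24 --decoder-layers 6 \
    --encoder-normalize-before --decoder-normalize-before \
    --share-decoder-input-output-embed --share-all-embeddings \
    --optimizer adam --adam-betas '(0.9, 0.980)' --clip-norm 0.0 \
    --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 8000 \
    --dropout 0.1 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --max-tokens 4000 --update-freq 12 \
    --max-update 17000 --save-interval-updates 1000

Averaging the last checkpoints, as described above, is typically done with Fairseq's scripts/average_checkpoints.py utility.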

Evaluation

Variables and metrics

We use the BLEU score for evaluation on the Flores-101 and NTREX test sets.
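For example, with the sacreBLEU package the score can be computed along these lines (the example strings are placeholders):

# Sketch of BLEU scoring with sacreBLEU (example strings are placeholders).
import sacrebleu

hypotheses = ["Bem-vindo ao projeto Aina!"]    # system outputs, one per segment
references = [["Bem-vindo ao projeto Aina!"]]  # a single reference stream
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")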

Evaluation results

Below are the evaluation results for machine translation from Catalan to Portuguese, compared to SoftCatalà and Google Translate:

| Test set | SoftCatalà | Google Translate | mt-aina-ca-pt |
|---|---:|---:|---:|
| Flores-101 dev | 30.9 | 41.4 | 34.3 |
| Flores-101 devtest | 31.6 | 41.3 | 35.2 |
| NTREX | 27.9 | 30.1 | 28.0 |
| Average | 30.1 | 37.6 | 32.5 |

Additional information

Author

Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center

Contact information

For further information, please send an email to langtech@bsc.es.

Copyright

Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023)

Licensing information

This work is licensed under the Apache License, Version 2.0.

Funding

This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.

Disclaimer

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.