---
license: apache-2.0
---

## Projecte Aina’s Catalan-Italian machine translation model

## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Data Preparation](#data-preparation)
- [Tokenization](#tokenization)
- [Hyperparameters](#hyperparameters)
- [Evaluation](#evaluation)
- [Variables and Metrics](#variables-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing Information](#licensing-information)
- [Funding](#funding)
- [Disclaimer](#disclaimer)

## Model description

This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/latest/) on a combination of Catalan-Italian datasets which, after filtering and cleaning, comprised 9,482,927 sentence pairs. The model was evaluated on the Flores and NTREX evaluation datasets.

## Intended uses and limitations

You can use this model for machine translation from Catalan to Italian.

## How to use

### Usage
Install the required libraries:

```bash
pip install ctranslate2 pyonmttok
```

Translate a sentence using Python:
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download

# Download the model and load the SentencePiece model that ships with it
model_dir = snapshot_download(repo_id="projecte-aina/mt-aina-ca-it", revision="main")
tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")

# tokenize() returns a (tokens, features) tuple; the translator expects the token list
tokenized = tokenizer.tokenize("Benvingut al projecte Aina!")

translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]['tokens']))
```
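
To translate several sentences at once, tokenize each one and pass the whole batch to `translate_batch`. A minimal sketch, continuing from the snippet above (the example sentences are ours):
```python
# Batch translation: reuses `tokenizer` and `translator` from the snippet above
sentences = ["Bon dia!", "Com va tot?"]
batch = [tokenizer.tokenize(s)[0] for s in sentences]

results = translator.translate_batch(batch)
for result in results:
    # Each result holds one or more hypotheses; take the best one
    print(tokenizer.detokenize(result.hypotheses[0]))
```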

## Training

### Training data

The model was trained on a combination of the following datasets:

| Dataset           | Sentences      | Sentences after Cleaning |
|-------------------|----------------|--------------------------|
| CCMatrix v1       | 11,444,720     | 7,757,357                |
| MultiCCAligned v1 | 1,379,251      | 1,010,921                |
| WikiMatrix        | 316,208        | 271,587                  |
| GNOME             | 8,571          | 1,198                    |
| KDE4              | 163,907        | 115,027                  |
| QED               | 64,630         | 52,616                   |
| TED2020 v1        | 50,897         | 43,280                   |
| OpenSubtitles     | 391,293        | 225,732                  |
| GlobalVoices      | 6,318          | 5,209                    |
| **Total**         | **13,825,795** | **9,482,927**            |

### Training procedure

#### Data preparation

All datasets were deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75, computed on sentence embeddings from [LaBSE](https://huggingface.co/sentence-transformers/LaBSE). The filtered datasets were then concatenated to form a final corpus of 9,482,927 sentence pairs. Before training, punctuation was normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
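
As an illustration of this filtering step, the sketch below scores sentence pairs with the `sentence-transformers` implementation of LaBSE; the helper function and its batching are our own simplification, not the exact pipeline used:
```python
from sentence_transformers import SentenceTransformer
import numpy as np

labse = SentenceTransformer("sentence-transformers/LaBSE")

def filter_pairs(ca_sentences, it_sentences, threshold=0.75):
    """Keep pairs whose LaBSE cosine similarity is at least `threshold` (illustrative helper)."""
    ca_emb = labse.encode(ca_sentences, normalize_embeddings=True)
    it_emb = labse.encode(it_sentences, normalize_embeddings=True)
    # With L2-normalized embeddings, the row-wise dot product equals cosine similarity
    sims = np.sum(ca_emb * it_emb, axis=1)
    return [(ca, it) for ca, it, s in zip(ca_sentences, it_sentences, sims) if s >= threshold]
```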

#### Tokenization

All data was tokenized using SentencePiece, with a 50,000-token vocabulary learned from the combination of all filtered training data. This SentencePiece model is included in the repository as `spm.model`.
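
For reference, a vocabulary of this size can be learned with the SentencePiece Python API roughly as follows; the input path is a hypothetical placeholder, not a file from this repository:
```python
import sentencepiece as spm

# Learn a 50,000-token SentencePiece model on the concatenated filtered corpus.
# "train.ca-it.txt" is a hypothetical path; the outputs are spm.model and spm.vocab.
spm.SentencePieceTrainer.train(
    input="train.ca-it.txt",
    model_prefix="spm",
    vocab_size=50000,
)
```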

#### Hyperparameters

The model is based on the Transformer-XLarge proposed by [Subramanian et al.](https://aclanthology.org/2021.wmt-1.18.pdf)
The following hyperparameters were set in the Fairseq toolkit:

| Hyperparameter                     | Value                             |
|------------------------------------|-----------------------------------|
| Architecture                       | transformer_vaswani_wmt_en_de_big |
| Embedding size                     | 1024                              |
| Feedforward size                   | 4096                              |
| Number of heads                    | 16                                |
| Encoder layers                     | 24                                |
| Decoder layers                     | 6                                 |
| Normalize before attention         | True                              |
| --share-decoder-input-output-embed | True                              |
| --share-all-embeddings             | True                              |
| Effective batch size               | 48,000                            |
| Optimizer                          | Adam                              |
| Adam betas                         | (0.9, 0.980)                      |
| Clip norm                          | 0.0                               |
| Learning rate                      | 5e-4                              |
| LR scheduler                       | inverse sqrt                      |
| Warmup updates                     | 8000                              |
| Dropout                            | 0.1                               |
| Label smoothing                    | 0.1                               |

The model was trained for a total of 36,000 updates. Weights were saved every 1,000 updates, and the reported results are the average of the last 4 checkpoints.
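
Taken together, these settings correspond roughly to a `fairseq-train` invocation like the sketch below. The data path, `--max-tokens`, and `--update-freq` are our assumptions; only their product, the effective batch size of 48,000 tokens per update, is stated above:
```bash
fairseq-train data-bin/ca-it \
    --arch transformer_vaswani_wmt_en_de_big \
    --encoder-layers 24 --decoder-layers 6 \
    --encoder-normalize-before --decoder-normalize-before \
    --share-all-embeddings --share-decoder-input-output-embed \
    --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
    --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 8000 \
    --dropout 0.1 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --max-tokens 12000 --update-freq 4 \
    --max-update 36000 --save-interval-updates 1000
```

The saved checkpoints can then be combined with Fairseq's `scripts/average_checkpoints.py` to produce the averaged weights reported here.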

## Evaluation

### Variables and metrics

We use the BLEU score for evaluation on the [Flores-101](https://github.com/facebookresearch/flores) test set.
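
The score can be reproduced with, for example, [sacreBLEU](https://github.com/mjpost/sacrebleu); this is a generic sketch with dummy strings, as the card does not specify the exact scorer configuration:
```python
import sacrebleu

# `hypotheses` are the system outputs; `references` is a list of reference streams
hypotheses = ["Benvenuto al progetto Aina!"]
references = [["Benvenuto al progetto Aina!"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")
```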

### Evaluation results

Below are the evaluation results for machine translation from Catalan to Italian, compared to [Softcatalà](https://www.softcatala.org/) and [Google Translate](https://translate.google.es/?hl=es):

| Test set           | SoftCatalà | Google Translate | mt-aina-ca-it |
|--------------------|------------|------------------|---------------|
| Flores 101 dev     | 24.3       | **28.5**         | 26.1          |
| Flores 101 devtest | 24.7       | **29.1**         | 26.3          |
| Average            | 24.5       | **28.8**         | 26.2          |

## Additional information

### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center (langtech@bsc.es)

### Contact information
For further information, send an email to <aina@bsc.es>.

### Copyright
Copyright Language Technologies Unit at the Barcelona Supercomputing Center (2023)

### Licensing information
This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Funding
This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.

### Disclaimer

<details>
<summary>Click to expand</summary>

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have biases and/or other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or systems based on them), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from such use and, in any event, to comply with applicable regulations, including those regarding the use of Artificial Intelligence.

In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
</details>