Xabi Ezpeleta committed on
Commit
9a20592
1 Parent(s): 047f26c

Copy original files

Files changed (7)
  1. README.md +140 -0
  2. config.json +8 -43
  3. source.spm +0 -0
  4. special_tokens_map.json +1 -0
  5. target.spm +0 -0
  6. tokenizer_config.json +1 -0
  7. vocab.json +0 -0
README.md ADDED
@@ -0,0 +1,140 @@
---
license: apache-2.0
language:
- en
- eu
metrics:
- BLEU
- TER
---
## HiTZ Center’s English-Basque machine translation model

## Model description

This model was trained from scratch using [Marian NMT](https://marian-nmt.github.io/) on a combination of English-Basque datasets totalling 20,523,431 sentence pairs. 9,033,998 sentence pairs were parallel data collected from the web, while the remaining 11,489,433 sentence pairs were parallel synthetic data created using the [Google Translate translator](https://translate.google.com/about/). The model was evaluated on the Flores, TaCon and NTREX evaluation datasets.

- **Developed by:** HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
- **Model type:** translation
- **Source Language:** English
- **Target Language:** Basque
- **License:** apache-2.0

## Intended uses and limitations

You can use this model for machine translation from English to Basque.

At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources.

## How to Get Started with the Model

Use the code below to get started with the model.

```
from transformers import MarianTokenizer
from transformers import AutoModelForSeq2SeqLM

src_text = ["this is a test"]

model_name = "HiTZ/mt-hitz-en-eu"
tokenizer = MarianTokenizer.from_pretrained(model_name)

model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
```

The recommended environment includes the following `transformers` versions: 4.12.3, 4.15.0 and 4.26.1.

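For a higher-level interface, the snippet below is a minimal sketch using the generic `transformers` translation pipeline; it assumes a recent `transformers` release with `sentencepiece` installed and loads the same `HiTZ/mt-hitz-en-eu` checkpoint as above.

```
from transformers import pipeline

# Minimal sketch: the "translation" pipeline wraps the tokenizer and model shown above.
translator = pipeline("translation", model="HiTZ/mt-hitz-en-eu")
print(translator(["this is a test"]))
```
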
## Training Details

### Training Data

The English-Basque data collected from the web was a combination of the following datasets:

| Dataset         | Sentences before cleaning |
|-----------------|--------------------------:|
| CCMatrix v1     |                 7,788,871 |
| EhuHac          |                   585,210 |
| Ehuskaratuak    |                   482,259 |
| Elhuyar         |                 1,176,529 |
| HPLT            |                 4,546,563 |
| OpenSubtitles   |                   805,780 |
| PaCO_2012       |                   109,524 |
| PaCO_2013       |                    48,892 |
| WikiMatrix      |                   119,480 |
| **Total**       |            **15,653,108** |

The 11,489,433 sentence pairs of synthetic parallel data were created by translating a compendium of ES-EU parallel corpora into English using the [ES-EN translator from Google Translate](https://translate.google.com/about/).

### Training Procedure

#### Preprocessing

After concatenation, all datasets are cleaned and deduplicated using [bifixer](https://github.com/bitextor/bifixer) [(Ramírez-Sánchez et al., 2020)](https://aclanthology.org/2020.eamt-1.31/) to identify repetitions and fix encoding problems, and LaBSE embeddings are used to filter out misaligned sentences. Any sentence pair with a LaBSE similarity score below 0.5 is removed. The filtered corpus is composed of 9,033,998 parallel sentences.

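The exact filtering scripts are not included in this repository; the following is a minimal, hypothetical sketch of the LaBSE-based filtering step described above, assuming the `sentence-transformers/LaBSE` checkpoint and an in-memory list of sentence pairs (the function name and 0.5 threshold simply mirror the description).

```
from sentence_transformers import SentenceTransformer
import numpy as np

# Hypothetical illustration of the LaBSE similarity filter described above.
labse = SentenceTransformer("sentence-transformers/LaBSE")

def filter_pairs(pairs, threshold=0.5):
    """Keep only (en, eu) pairs whose LaBSE cosine similarity is at least `threshold`."""
    en = labse.encode([p[0] for p in pairs], normalize_embeddings=True)
    eu = labse.encode([p[1] for p in pairs], normalize_embeddings=True)
    sims = np.sum(en * eu, axis=1)  # cosine similarity of normalized embeddings
    return [pair for pair, sim in zip(pairs, sims) if sim >= threshold]
```
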
#### Tokenization

All data is tokenized using SentencePiece, with a 32,000-token SentencePiece model learned from the combination of all filtered training data. This model is included.

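The training command for the SentencePiece model is not part of this repository; a minimal sketch along the lines described above might look as follows (the input file name and model prefix are placeholders).

```
import sentencepiece as spm

# Learn a joint 32k SentencePiece model from the filtered parallel text
# ("train.en-eu.txt" is a placeholder: one sentence per line, both languages combined).
spm.SentencePieceTrainer.train(
    input="train.en-eu.txt",
    model_prefix="spm_en_eu",
    vocab_size=32000,
)

sp = spm.SentencePieceProcessor(model_file="spm_en_eu.model")
print(sp.encode("this is a test", out_type=str))
```
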
## Evaluation

### Variables and metrics

We use BLEU and TER scores for evaluation on the following test sets: [Flores-200](https://github.com/facebookresearch/flores/tree/main/flores200), [TaCon](https://elrc-share.eu/repository/browse/tacon-spanish-constitution-mt-test-set/84a96138b98611ec9c1a00155d02670628f3e6857b0f422abd82abc3795ec8c2/) and [NTREX](https://github.com/MicrosoftTranslator/NTREX).

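The scores below can be reproduced with standard tooling; as a rough sketch, a `sacrebleu` call like the following computes both metrics, assuming `hypotheses` holds the system outputs and `references` the reference translations for one test set (the strings here are placeholders, not real data).

```
import sacrebleu

# Placeholder data: in practice these come from the decoded test set and its references.
hypotheses = ["system output sentence 1", "system output sentence 2"]
references = [["reference sentence 1", "reference sentence 2"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
ter = sacrebleu.corpus_ter(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}  TER = {ter.score:.1f}")
```
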
### Evaluation results

Below are the evaluation results for machine translation from English to Basque, compared to [Google Translate](https://translate.google.com/) and [NLLB 200 3.3B](https://huggingface.co/facebook/nllb-200-3.3B):

#### BLEU scores

| Test set           | Google Translate | NLLB 3.3B | mt-hitz-en-eu |
|--------------------|------------------|-----------|---------------|
| Flores 200 devtest | **20.5**         | 13.3      | 19.2          |
| TaCON              | **12.1**         | 9.4       | 8.8           |
| NTREX              | **15.7**         | 8.0       | 14.5          |
| Average            | **16.1**         | 10.2      | 14.2          |

#### TER scores

| Test set           | Google Translate | NLLB 3.3B | mt-hitz-en-eu |
|--------------------|------------------|-----------|---------------|
| Flores 200 devtest | **59.5**         | 70.4      | 65.0          |
| TaCON              | **69.5**         | 75.3      | 76.8          |
| NTREX              | **65.8**         | 81.6      | 66.7          |
| Average            | **64.9**         | 75.8      | 68.2          |

<!-- For now we do not have a paper. If something is done within ILENIA, this will need to be updated. -->

<!--
## Citation [optional]

If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section.

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]
-->

## Additional information

### Author
HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)

### Contact information
For further information, send an email to <hitz@ehu.eus>

### Licensing information
This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the [ILENIA project](https://proyectoilenia.es/) with references 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 and 2022/TL22/00215334.

### Disclaimer

<details>
<summary>Click to expand</summary>

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.

In no event shall the owner and creator of the models (HiTZ Research Center) be liable for any results arising from the use made by third parties of these models.
</details>
config.json CHANGED
@@ -1,45 +1,10 @@
  {
- "activation_dropout": 0.0,
- "activation_function": "swish",
- "architectures": [
- "MarianMTModel"
- ],
- "attention_dropout": 0.0,
- "bad_words_ids": [
- [
- 8000
- ]
- ],
- "bos_token_id": 0,
- "classifier_dropout": 0.0,
- "d_model": 512,
- "decoder_attention_heads": 8,
- "decoder_ffn_dim": 2048,
- "decoder_layerdrop": 0.0,
- "decoder_layers": 6,
- "decoder_start_token_id": 8000,
- "decoder_vocab_size": 8001,
- "dropout": 0.1,
- "encoder_attention_heads": 8,
- "encoder_ffn_dim": 2048,
- "encoder_layerdrop": 0.0,
- "encoder_layers": 6,
- "eos_token_id": 0,
- "forced_eos_token_id": 0,
- "init_std": 0.02,
- "is_encoder_decoder": true,
- "max_length": 512,
- "max_position_embeddings": 512,
- "model_type": "marian",
- "normalize_embedding": false,
- "num_beams": 6,
- "num_hidden_layers": 6,
- "pad_token_id": 8000,
- "scale_embedding": true,
- "share_encoder_decoder_embeddings": true,
- "static_position_embeddings": true,
- "torch_dtype": "float16",
- "transformers_version": "4.12.3",
- "use_cache": true,
- "vocab_size": 8001
+ "add_source_bos": false,
+ "add_source_eos": false,
+ "bos_token": "<s>",
+ "decoder_start_token": "</s>",
+ "eos_token": "</s>",
+ "layer_norm_epsilon": null,
+ "multi_query_attention": false,
+ "unk_token": "<unk>"
  }
source.spm ADDED
Binary file (374 kB).
 
special_tokens_map.json ADDED
@@ -0,0 +1 @@
{"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
target.spm ADDED
Binary file (374 kB).
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
{"source_lang": "-ca", "target_lang": "eu", "unk_token": "<unk>", "eos_token": "</s>", "pad_token": "<pad>", "model_max_length": 512, "sp_model_kwargs": {}, "separate_vocabs": false, "special_tokens_map_file": null, "tokenizer_file": null, "name_or_path": "/ikerlariak/jiblaing/NMT_2023/en-eu_SINT/model", "tokenizer_class": "MarianTokenizer"}
vocab.json ADDED
The diff for this file is too large to render.