Commit 02acbd0 by tiedeman
1 Parent(s): 89e81a0

Initial commit
.gitattributes CHANGED
@@ -29,3 +29,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.spm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,214 @@
+ ---
+ language:
+ - fi
+ - sl
+ - zls
+
+ tags:
+ - translation
+ - opus-mt-tc
+
+ license: cc-by-4.0
+ model-index:
+ - name: opus-mt-tc-big-fi-zls
+   results:
+   - task:
+       name: Translation fin-bul
+       type: translation
+       args: fin-bul
+     dataset:
+       name: flores101-devtest
+       type: flores_101
+       args: fin bul devtest
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 26.2
+     - name: chr-F
+       type: chrf
+       value: 0.54912
+   - task:
+       name: Translation fin-hrv
+       type: translation
+       args: fin-hrv
+     dataset:
+       name: flores101-devtest
+       type: flores_101
+       args: fin hrv devtest
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 21.3
+     - name: chr-F
+       type: chrf
+       value: 0.51468
+   - task:
+       name: Translation fin-slv
+       type: translation
+       args: fin-slv
+     dataset:
+       name: flores101-devtest
+       type: flores_101
+       args: fin slv devtest
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 22.3
+     - name: chr-F
+       type: chrf
+       value: 0.51226
+   - task:
+       name: Translation fin-srp_Cyrl
+       type: translation
+       args: fin-srp_Cyrl
+     dataset:
+       name: flores101-devtest
+       type: flores_101
+       args: fin srp_Cyrl devtest
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 21.8
+     - name: chr-F
+       type: chrf
+       value: 0.50774
+ ---
+ # opus-mt-tc-big-fi-zls
+
+ ## Table of Contents
+ - [Model Details](#model-details)
+ - [Uses](#uses)
+ - [Risks, Limitations and Biases](#risks-limitations-and-biases)
+ - [How to Get Started With the Model](#how-to-get-started-with-the-model)
+ - [Training](#training)
+ - [Evaluation](#evaluation)
+ - [Citation Information](#citation-information)
+ - [Acknowledgements](#acknowledgements)
+
+ ## Model Details
+
+ Neural machine translation model for translating from Finnish (fi) to South Slavic languages (zls).
+
+ This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages of the world. All models were originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++, and have been converted to PyTorch using the Hugging Face transformers library. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines follow the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
+
+ **Model Description:**
+ - **Developed by:** Language Technology Research Group at the University of Helsinki
+ - **Model Type:** Translation (transformer-big)
+ - **Release**: 2022-07-23
+ - **License:** CC-BY-4.0
+ - **Language(s):**
+   - Source Language(s): fin
+   - Target Language(s): bul, hrv, slv, srp_Cyrl
+ - **Original Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zls/opusTCv20210807_transformer-big_2022-07-23.zip)
+ - **Resources for more information:**
+   - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
+   - More information about released models for this language pair: [OPUS-MT fin-zls README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fin-zls/README.md)
+   - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
+   - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
+
+ ## Uses
+
+ This model can be used for translation and text-to-text generation.
+
+ ## Risks, Limitations and Biases
+
+ **CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
+
+ ## How to Get Started With the Model
+
+ A short code example:
+
+ ```python
+ from transformers import MarianMTModel, MarianTokenizer
+
+ src_text = [
+     ">>bul<< Ajattelen vain sinua.",
+     ">>slv<< Virtahevot rakastavat vettä."
+ ]
+
+ model_name = "Helsinki-NLP/opus-mt-tc-big-fi-zls"
+ tokenizer = MarianTokenizer.from_pretrained(model_name)
+ model = MarianMTModel.from_pretrained(model_name)
+ translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
+
+ for t in translated:
+     print(tokenizer.decode(t, skip_special_tokens=True))
+
+ # expected output:
+ # Мисля само за теб.
+ # Povodni konji obožujejo vodo.
+ ```
+
+ You can also use OPUS-MT models with the transformers pipelines, for example:
+
+ ```python
+ from transformers import pipeline
+ pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fi-zls")
+ print(pipe(">>bul<< Ajattelen vain sinua."))
+
+ # expected output: Мисля само за теб.
+ ```
+
+ ## Training
+
+ - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
+ - **Pre-processing**: SentencePiece (spm32k,spm32k); see the tokenization sketch below
+ - **Model Type:** transformer-big
+ - **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zls/opusTCv20210807_transformer-big_2022-07-23.zip)
+ - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
+
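+ As an illustration of the SentencePiece pre-processing, the minimal sketch below loads the `source.spm` model shipped in this repository and segments a sample sentence. It assumes the `sentencepiece` Python package is installed and that the script runs from a local checkout of this repository; the exact training-time pre-processing pipeline is defined in OPUS-MT-train and may differ in details.
+
+ ```python
+ import sentencepiece as spm
+
+ # Load the source-side SentencePiece model (assumed path: repository root).
+ sp = spm.SentencePieceProcessor()
+ sp.Load("source.spm")
+
+ print(sp.GetPieceSize())                              # vocabulary size of the spm32k model
+ print(sp.EncodeAsPieces("Ajattelen vain sinua."))     # subword segmentation of a Finnish sentence
+ ```
+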
+ ## Evaluation
+
+ * test set translations: [opusTCv20210807_transformer-big_2022-07-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zls/opusTCv20210807_transformer-big_2022-07-23.test.txt)
+ * test set scores: [opusTCv20210807_transformer-big_2022-07-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fin-zls/opusTCv20210807_transformer-big_2022-07-23.eval.txt)
+ * benchmark results: [benchmark_results.txt](benchmark_results.txt)
+ * benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
+
+ | langpair | testset | chr-F | BLEU | #sent | #words |
+ |----------|---------|-------|------|-------|--------|
+ | fin-bul | flores101-devtest | 0.54912 | 26.2 | 1012 | 24700 |
+ | fin-hrv | flores101-devtest | 0.51468 | 21.3 | 1012 | 22423 |
+ | fin-slv | flores101-devtest | 0.51226 | 22.3 | 1012 | 23425 |
+ | fin-srp_Cyrl | flores101-devtest | 0.50774 | 21.8 | 1012 | 23456 |
+
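+ The chr-F and BLEU figures above can be recomputed with `sacrebleu` from the test set translations linked in this section. The sketch below is illustrative only: the hypothesis/reference file names are placeholders, and the options used by the official OPUS-MT evaluation may differ.
+
+ ```python
+ import sacrebleu
+
+ # Placeholder file names: one translated sentence per line, one reference per line.
+ with open("fin-bul.hyp.txt", encoding="utf-8") as f:
+     hypotheses = [line.rstrip("\n") for line in f]
+ with open("fin-bul.ref.txt", encoding="utf-8") as f:
+     references = [line.rstrip("\n") for line in f]
+
+ bleu = sacrebleu.corpus_bleu(hypotheses, [references])
+ chrf = sacrebleu.corpus_chrf(hypotheses, [references])
+ print(f"BLEU  = {bleu.score:.1f}")
+ print(f"chr-F = {chrf.score / 100:.5f}")   # the table reports chr-F on a 0-1 scale
+ ```
+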
+ ## Citation Information
+
+ * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please cite these publications if you use this model.)
+
+ ```
+ @inproceedings{tiedemann-thottingal-2020-opus,
+     title = "{OPUS}-{MT} {--} Building open translation services for the World",
+     author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
+     booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
+     month = nov,
+     year = "2020",
+     address = "Lisboa, Portugal",
+     publisher = "European Association for Machine Translation",
+     url = "https://aclanthology.org/2020.eamt-1.61",
+     pages = "479--480",
+ }
+
+ @inproceedings{tiedemann-2020-tatoeba,
+     title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
+     author = {Tiedemann, J{\"o}rg},
+     booktitle = "Proceedings of the Fifth Conference on Machine Translation",
+     month = nov,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2020.wmt-1.139",
+     pages = "1174--1182",
+ }
+ ```
+
+ ## Acknowledgements
+
+ The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and by the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
+
+ ## Model conversion info
+
+ * transformers version: 4.16.2
+ * OPUS-MT git hash: 8b9f0b0
+ * port time: Fri Aug 12 19:46:27 EEST 2022
+ * port machine: LM0-400-22516.local
benchmark_results.txt ADDED
@@ -0,0 +1,8 @@
+ fin-bul flores101-dev 0.54702 26.1 997 23520
+ fin-hrv flores101-dev 0.51782 21.7 997 21567
+ fin-slv flores101-dev 0.51560 22.6 997 22448
+ fin-srp_Cyrl flores101-dev 0.50989 22.5 997 22384
+ fin-bul flores101-devtest 0.54912 26.2 1012 24700
+ fin-hrv flores101-devtest 0.51468 21.3 1012 22423
+ fin-slv flores101-devtest 0.51226 22.3 1012 23425
+ fin-srp_Cyrl flores101-devtest 0.50774 21.8 1012 23456
benchmark_translations.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32f9fd7f8ccbe479c9fb81a8d3743776d8c7e6f2ec72c0bd4de85ca95d823d3b
+ size 1386524
config.json ADDED
@@ -0,0 +1,45 @@
+ {
+   "activation_dropout": 0.0,
+   "activation_function": "relu",
+   "architectures": [
+     "MarianMTModel"
+   ],
+   "attention_dropout": 0.0,
+   "bad_words_ids": [
+     [
+       59930
+     ]
+   ],
+   "bos_token_id": 0,
+   "classifier_dropout": 0.0,
+   "d_model": 1024,
+   "decoder_attention_heads": 16,
+   "decoder_ffn_dim": 4096,
+   "decoder_layerdrop": 0.0,
+   "decoder_layers": 6,
+   "decoder_start_token_id": 59930,
+   "decoder_vocab_size": 59931,
+   "dropout": 0.1,
+   "encoder_attention_heads": 16,
+   "encoder_ffn_dim": 4096,
+   "encoder_layerdrop": 0.0,
+   "encoder_layers": 6,
+   "eos_token_id": 31784,
+   "forced_eos_token_id": 31784,
+   "init_std": 0.02,
+   "is_encoder_decoder": true,
+   "max_length": 512,
+   "max_position_embeddings": 1024,
+   "model_type": "marian",
+   "normalize_embedding": false,
+   "num_beams": 4,
+   "num_hidden_layers": 6,
+   "pad_token_id": 59930,
+   "scale_embedding": true,
+   "share_encoder_decoder_embeddings": true,
+   "static_position_embeddings": true,
+   "torch_dtype": "float16",
+   "transformers_version": "4.18.0.dev0",
+   "use_cache": true,
+   "vocab_size": 59931
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46c47db9310c640f943fe3e86ed99a36f794dff74ad4ff4cdcb77c6b39d7f5fe
+ size 598403651
source.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b4ced141eaa1550cc1b7ebe633bfe50bc897958f84a511f16bb1ec6d8b5244d
+ size 831346
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
target.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:41d643532a38592216444fc732f50a8833bb4cd2070a966a81ddf92aa2620c55
+ size 876650
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"source_lang": "fi", "target_lang": "zls", "unk_token": "<unk>", "eos_token": "</s>", "pad_token": "<pad>", "model_max_length": 512, "sp_model_kwargs": {}, "separate_vocabs": false, "special_tokens_map_file": null, "name_or_path": "marian-models/opusTCv20210807_transformer-big_2022-07-23/fi-zls", "tokenizer_class": "MarianTokenizer"}
vocab.json ADDED
The diff for this file is too large to render.