tiedeman committed
Commit 4c86a9a (1 parent: 0aaf57b)

Initial commit
.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.spm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,170 @@
+ ---
+ language:
+ - bat
+ - lt
+ - lv
+ - ru
+ - zle
+
+ tags:
+ - translation
+
+ license: cc-by-4.0
+ model-index:
+ - name: opus-mt-tc-base-bat-zle
+   results:
+   - task:
+       name: Translation lav-rus
+       type: translation
+       args: lav-rus
+     dataset:
+       name: flores101-devtest
+       type: flores_101
+       args: lav rus devtest
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 21.1
+   - task:
+       name: Translation lit-rus
+       type: translation
+       args: lit-rus
+     dataset:
+       name: flores101-devtest
+       type: flores_101
+       args: lit rus devtest
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 21.3
+   - task:
+       name: Translation lav-rus
+       type: translation
+       args: lav-rus
+     dataset:
+       name: tatoeba-test-v2021-08-07
+       type: tatoeba_mt
+       args: lav-rus
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 60.5
+   - task:
+       name: Translation lit-rus
+       type: translation
+       args: lit-rus
+     dataset:
+       name: tatoeba-test-v2021-08-07
+       type: tatoeba_mt
+       args: lit-rus
+     metrics:
+     - name: BLEU
+       type: bleu
+       value: 54.9
+ ---
+ # opus-mt-tc-base-bat-zle
+
+ Neural machine translation model for translating from Baltic languages (bat) to East Slavic languages (zle).
+
+ This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages of the world. All models were originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++, and have been converted to PyTorch using the Hugging Face transformers library. Training data is taken from [OPUS](https://opus.nlpl.eu/) and the training pipelines follow the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
+
+ * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (please cite them if you use this model)
+
+ ```
+ @inproceedings{tiedemann-thottingal-2020-opus,
+     title = "{OPUS}-{MT} {--} Building open translation services for the World",
+     author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
+     booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
+     month = nov,
+     year = "2020",
+     address = "Lisboa, Portugal",
+     publisher = "European Association for Machine Translation",
+     url = "https://aclanthology.org/2020.eamt-1.61",
+     pages = "479--480",
+ }
+
+ @inproceedings{tiedemann-2020-tatoeba,
+     title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
+     author = {Tiedemann, J{\"o}rg},
+     booktitle = "Proceedings of the Fifth Conference on Machine Translation",
+     month = nov,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2020.wmt-1.139",
+     pages = "1174--1182",
+ }
+ ```
+
+ ## Model info
+
+ * Release: 2022-03-13
+ * source language(s): lav lit
+ * target language(s): rus
+ * model: transformer-align
+ * data: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
+ * tokenization: SentencePiece (spm32k,spm32k)
+ * original model: [opusTCv20210807_transformer-align_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-zle/opusTCv20210807_transformer-align_2022-03-13.zip)
+ * more information on released models: [OPUS-MT bat-zle README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bat-zle/README.md)
+
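+ The `source.spm` and `target.spm` files shipped with this model are the raw SentencePiece models, so the spm32k vocabularies can also be inspected directly with the `sentencepiece` library. A minimal sketch, assuming a local clone of this repository:
+
+ ```python
+ import sentencepiece as spm
+
+ # load the source-side SentencePiece model (spm32k)
+ sp = spm.SentencePieceProcessor(model_file="source.spm")
+
+ # segment a Latvian sentence into subword pieces
+ print(sp.encode("Āfrika ir cilvēces šūpulis.", out_type=str))
+ ```
+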
+ ## Usage
+
+ A short example. Note that this model translates into multiple target languages, so each input sentence needs a leading target-language token such as `>>rus<<` or `>>ukr<<`:
+
+ ```python
+ from transformers import MarianMTModel, MarianTokenizer
+
+ src_text = [
+     ">>rus<< Āfrika ir cilvēces šūpulis.",
+     ">>ukr<< Tomas yra mūsų kapitonas."
+ ]
+
+ model_name = "Helsinki-NLP/opus-mt-tc-base-bat-zle"
+ tokenizer = MarianTokenizer.from_pretrained(model_name)
+ model = MarianMTModel.from_pretrained(model_name)
+ translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
+
+ for t in translated:
+     print(tokenizer.decode(t, skip_special_tokens=True))
+
+ # expected output:
+ # Африка - это колыбель человечества.
+ # Томас - наш капітан.
+ ```
+
+ You can also use OPUS-MT models with the transformers pipeline API, for example:
+
+ ```python
+ from transformers import pipeline
+
+ pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-base-bat-zle")
+ print(pipe(">>rus<< Āfrika ir cilvēces šūpulis."))
+
+ # expected output: Африка - это колыбель человечества.
+ ```
+
+ ## Benchmarks
+
+ * test set translations: [opusTCv20210807_transformer-align_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-zle/opusTCv20210807_transformer-align_2022-03-13.test.txt)
+ * test set scores: [opusTCv20210807_transformer-align_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-zle/opusTCv20210807_transformer-align_2022-03-13.eval.txt)
+ * benchmark results: [benchmark_results.txt](benchmark_results.txt)
+ * benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
+
+ | langpair | testset | chr-F | BLEU | #sent | #words |
+ |----------|---------|-------|------|-------|--------|
+ | lav-rus | tatoeba-test-v2021-08-07 | 0.75918 | 60.5 | 274 | 1541 |
+ | lit-rus | tatoeba-test-v2021-08-07 | 0.72796 | 54.9 | 3598 | 21908 |
+ | lav-rus | flores101-devtest | 0.49210 | 21.1 | 1012 | 23295 |
+ | lav-ukr | flores101-devtest | 0.48185 | 19.2 | 1012 | 22810 |
+ | lit-rus | flores101-devtest | 0.49850 | 21.3 | 1012 | 23295 |
+ | lit-ukr | flores101-devtest | 0.49114 | 19.5 | 1012 | 22810 |
+
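+ The scores above can in principle be reproduced from the released test set translations with [sacrebleu](https://github.com/mjpost/sacrebleu). A minimal sketch, assuming hypothesis and reference files (one segment per line) extracted from the test set archive linked above; the file names are illustrative:
+
+ ```python
+ from sacrebleu.metrics import BLEU, CHRF
+
+ # hypothetical file names, one segment per line
+ hyps = open("lav-rus.hyp.txt", encoding="utf-8").read().splitlines()
+ refs = open("lav-rus.ref.txt", encoding="utf-8").read().splitlines()
+
+ print(BLEU().corpus_score(hyps, [refs]))  # BLEU, as in the table above
+ print(CHRF().corpus_score(hyps, [refs]))  # chr-F, as in the table above
+ ```
+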
+ ## Acknowledgements
+
+ The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and by the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
+
+ ## Model conversion info
+
+ * transformers version: 4.16.2
+ * OPUS-MT git hash: 1bdabf7
+ * port time: Thu Mar 24 00:51:59 EET 2022
+ * port machine: LM0-400-22516.local
benchmark_results.txt ADDED
@@ -0,0 +1,14 @@
+ lav-rus flores101-dev 0.49207 21.8 997 22657
+ lav-ukr flores101-dev 0.47027 18.5 997 21841
+ lit-rus flores101-dev 0.50300 21.8 997 22657
+ lit-ukr flores101-dev 0.48536 18.9 997 21841
+ lav-rus flores101-devtest 0.49210 21.1 1012 23295
+ lav-ukr flores101-devtest 0.48185 19.2 1012 22810
+ lit-rus flores101-devtest 0.49850 21.3 1012 23295
+ lit-ukr flores101-devtest 0.49114 19.5 1012 22810
+ lav-rus tatoeba-test-v2020-07-28 0.75918 60.5 274 1541
+ lit-rus tatoeba-test-v2020-07-28 0.74583 56.8 2500 15395
+ lav-rus tatoeba-test-v2021-03-30 0.75899 60.4 276 1554
+ lit-rus tatoeba-test-v2021-03-30 0.73659 55.7 5296 32463
+ lav-rus tatoeba-test-v2021-08-07 0.75918 60.5 274 1541
+ lit-rus tatoeba-test-v2021-08-07 0.72796 54.9 3598 21908
benchmark_translations.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a4686084dbafa72aec5ce73515e2bc03ced93ed6791952d62e522b10d527f89
+ size 2082868
config.json ADDED
@@ -0,0 +1,45 @@
+ {
+   "activation_dropout": 0.0,
+   "activation_function": "swish",
+   "architectures": [
+     "MarianMTModel"
+   ],
+   "attention_dropout": 0.0,
+   "bad_words_ids": [
+     [
+       61943
+     ]
+   ],
+   "bos_token_id": 0,
+   "classifier_dropout": 0.0,
+   "d_model": 512,
+   "decoder_attention_heads": 8,
+   "decoder_ffn_dim": 2048,
+   "decoder_layerdrop": 0.0,
+   "decoder_layers": 6,
+   "decoder_start_token_id": 61943,
+   "decoder_vocab_size": 61944,
+   "dropout": 0.1,
+   "encoder_attention_heads": 8,
+   "encoder_ffn_dim": 2048,
+   "encoder_layerdrop": 0.0,
+   "encoder_layers": 6,
+   "eos_token_id": 24356,
+   "forced_eos_token_id": 24356,
+   "init_std": 0.02,
+   "is_encoder_decoder": true,
+   "max_length": 512,
+   "max_position_embeddings": 512,
+   "model_type": "marian",
+   "normalize_embedding": false,
+   "num_beams": 4,
+   "num_hidden_layers": 6,
+   "pad_token_id": 61943,
+   "scale_embedding": true,
+   "share_encoder_decoder_embeddings": true,
+   "static_position_embeddings": true,
+   "torch_dtype": "float16",
+   "transformers_version": "4.18.0.dev0",
+   "use_cache": true,
+   "vocab_size": 61944
+ }
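
A quick way to inspect these hyperparameters without downloading the model weights is to load only the configuration. A minimal sketch using the transformers `AutoConfig` helper (only `config.json` is fetched):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-tc-base-bat-zle")

# these values mirror the file above
print(config.model_type, config.d_model, config.encoder_layers, config.vocab_size)
# marian 512 6 61944
```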
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b6cf85424746da71e74395b2977c3404234ad5b37cf416be7f24f764777151fe
+ size 215353859
source.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c1cee20cd6d7b2bec4c73a5125350c0176d2349d3ebcfd4bfa001b73831cd057
+ size 827278
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
target.spm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:597c2d06f7ea38e1200dcc6816f023c584ac8c9647acb4895ea2216b796d4e5a
+ size 1018446
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"source_lang": "bat", "target_lang": "zle", "unk_token": "<unk>", "eos_token": "</s>", "pad_token": "<pad>", "model_max_length": 512, "sp_model_kwargs": {}, "separate_vocabs": false, "special_tokens_map_file": null, "name_or_path": "marian-models/opusTCv20210807_transformer-align_2022-03-13/bat-zle", "tokenizer_class": "MarianTokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff