tiedeman committed
Commit 5980a16
Parent(s): 062d7a4

Initial commit

.gitattributes CHANGED
@@ -26,3 +26,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zstandard filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.spm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,195 @@
---
language:
- bs_Latn
- en
- hr
- sh
- sr_Cyrl
- sr_Latn

tags:
- translation

license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-sh-en
  results:
  - task:
      name: Translation hrv-eng
      type: translation
      args: hrv-eng
    dataset:
      name: flores101-devtest
      type: flores_101
      args: hrv eng devtest
    metrics:
    - name: BLEU
      type: bleu
      value: 37.1
  - task:
      name: Translation bos_Latn-eng
      type: translation
      args: bos_Latn-eng
    dataset:
      name: tatoeba-test-v2021-08-07
      type: tatoeba_mt
      args: bos_Latn-eng
    metrics:
    - name: BLEU
      type: bleu
      value: 66.5
  - task:
      name: Translation hbs-eng
      type: translation
      args: hbs-eng
    dataset:
      name: tatoeba-test-v2021-08-07
      type: tatoeba_mt
      args: hbs-eng
    metrics:
    - name: BLEU
      type: bleu
      value: 56.4
  - task:
      name: Translation hrv-eng
      type: translation
      args: hrv-eng
    dataset:
      name: tatoeba-test-v2021-08-07
      type: tatoeba_mt
      args: hrv-eng
    metrics:
    - name: BLEU
      type: bleu
      value: 58.8
  - task:
      name: Translation srp_Cyrl-eng
      type: translation
      args: srp_Cyrl-eng
    dataset:
      name: tatoeba-test-v2021-08-07
      type: tatoeba_mt
      args: srp_Cyrl-eng
    metrics:
    - name: BLEU
      type: bleu
      value: 44.7
  - task:
      name: Translation srp_Latn-eng
      type: translation
      args: srp_Latn-eng
    dataset:
      name: tatoeba-test-v2021-08-07
      type: tatoeba_mt
      args: srp_Latn-eng
    metrics:
    - name: BLEU
      type: bleu
      value: 58.4
---
# opus-mt-tc-big-sh-en

Neural machine translation model for translating from Serbo-Croatian (sh) to English (en).

This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages of the world. All models were originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++, and then converted to PyTorch using the Hugging Face transformers library. Training data is taken from [OPUS](https://opus.nlpl.eu/), and the training pipelines follow the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).

* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (please cite these papers if you use this model)

```bibtex
@inproceedings{tiedemann-thottingal-2020-opus,
    title = "{OPUS}-{MT} {--} Building open translation services for the World",
    author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
    booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
    month = nov,
    year = "2020",
    address = "Lisboa, Portugal",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2020.eamt-1.61",
    pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
    title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
    author = {Tiedemann, J{\"o}rg},
    booktitle = "Proceedings of the Fifth Conference on Machine Translation",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wmt-1.139",
    pages = "1174--1182",
}
```

## Model info

* Release: 2022-02-25
* source language(s): bos_Latn hrv srp_Cyrl srp_Latn
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-eng/opusTCv20210807+bt_transformer-big_2022-02-25.zip)
* more information about released models: [OPUS-MT hbs-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hbs-eng/README.md)

## Usage

A short code example:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    "Ispostavilo se da je istina.",
    "Ovaj vikend imamo besplatne pozive."
]

model_name = "pytorch-models/opus-mt-tc-big-sh-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))

# expected output:
#     Turns out it's true.
#     We got free calls this weekend.
```

You can also use OPUS-MT models with the transformers pipelines, for example:

```python
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-sh-en")
print(pipe("Ispostavilo se da je istina."))

# expected output: Turns out it's true.
```
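
The checkpoint's config.json (shown further below) sets beam search with `num_beams: 4` and `max_length: 512` as generation defaults. A minimal sketch of overriding them at generation time, assuming the same public checkpoint as above; the specific values are illustrative, not recommendations:

```python
from transformers import MarianMTModel, MarianTokenizer

# Assumption: public hub checkpoint; the beam/length values are examples only.
model_name = "Helsinki-NLP/opus-mt-tc-big-sh-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Ovaj vikend imamo besplatne pozive."], return_tensors="pt", padding=True)
outputs = model.generate(
    **batch,
    num_beams=8,         # config.json default is 4
    max_new_tokens=128,  # bound the output length explicitly
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```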

## Benchmarks

* test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-eng/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-eng/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)

| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|------|-------|--------|
| bos_Latn-eng | tatoeba-test-v2021-08-07 | 0.80010 | 66.5 | 301 | 1826 |
| hbs-eng | tatoeba-test-v2021-08-07 | 0.71744 | 56.4 | 10017 | 68934 |
| hrv-eng | tatoeba-test-v2021-08-07 | 0.73563 | 58.8 | 1480 | 10620 |
| srp_Cyrl-eng | tatoeba-test-v2021-08-07 | 0.68248 | 44.7 | 1580 | 10181 |
| srp_Latn-eng | tatoeba-test-v2021-08-07 | 0.71781 | 58.4 | 6656 | 46307 |
| hrv-eng | flores101-devtest | 0.63948 | 37.1 | 1012 | 24721 |

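A minimal sketch of recomputing corpus-level BLEU and chrF with the sacrebleu library, assuming line-aligned hypothesis and reference files; the file names below are placeholders, not files in this repository:

```python
import sacrebleu

# Placeholder file names: one sentence per line, hypotheses aligned with references.
with open("hyp.eng.txt", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("ref.eng.txt", encoding="utf-8") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"BLEU:  {bleu.score:.1f}")
print(f"chr-F: {chrf.score / 100:.5f}")  # the table above reports chr-F on a 0-1 scale
```
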
## Acknowledgements

The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and by the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.

## Model conversion info

* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 19:21:10 EEST 2022
* port machine: LM0-400-22516.local

benchmark_results.txt ADDED
@@ -0,0 +1,17 @@
hrv-eng flores101-dev 0.64056 36.9 997 23555
hrv-eng flores101-devtest 0.63948 37.1 1012 24721
bos_Latn-eng tatoeba-test-v2020-07-28 0.80135 66.7 300 1820
hbs-eng tatoeba-test-v2020-07-28 0.71728 56.4 10000 68840
hrv-eng tatoeba-test-v2020-07-28 0.73475 58.7 1468 10556
srp_Cyrl-eng tatoeba-test-v2020-07-28 0.68232 44.7 1577 10163
srp_Latn-eng tatoeba-test-v2020-07-28 0.71785 58.4 6655 46301
bos_Latn-eng tatoeba-test-v2021-03-30 0.80135 66.7 300 1820
hbs-eng tatoeba-test-v2021-03-30 0.71727 56.4 10002 68852
hrv-eng tatoeba-test-v2021-03-30 0.73469 58.7 1469 10562
srp_Cyrl-eng tatoeba-test-v2021-03-30 0.68232 44.7 1577 10163
srp_Latn-eng tatoeba-test-v2021-03-30 0.71786 58.4 6656 46307
bos_Latn-eng tatoeba-test-v2021-08-07 0.80010 66.5 301 1826
hbs-eng tatoeba-test-v2021-08-07 0.71744 56.4 10017 68934
hrv-eng tatoeba-test-v2021-08-07 0.73563 58.8 1480 10620
srp_Cyrl-eng tatoeba-test-v2021-08-07 0.68248 44.7 1580 10181
srp_Latn-eng tatoeba-test-v2021-08-07 0.71781 58.4 6656 46307

benchmark_translations.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c71bfadcb9da2a129195fab777dad08b24903db4fdcab995ca9733287a6ee102
size 2210970
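
These three lines are a Git LFS pointer, not the archive itself; `git lfs` fetches the real object. A minimal sketch of checking a separately downloaded file against a pointer's sha256 and size; both local paths are placeholders:

```python
import hashlib
from pathlib import Path

# Placeholder paths: the saved pointer file and the downloaded LFS object.
pointer_lines = Path("benchmark_translations.zip.pointer").read_text().splitlines()
pointer = dict(line.split(" ", 1) for line in pointer_lines)
expected_oid = pointer["oid"].split(":", 1)[1]  # strip the "sha256:" prefix
expected_size = int(pointer["size"])

blob = Path("benchmark_translations.zip").read_bytes()
assert len(blob) == expected_size, "size mismatch"
assert hashlib.sha256(blob).hexdigest() == expected_oid, "sha256 mismatch"
print("LFS object matches its pointer")
```
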
config.json ADDED
@@ -0,0 +1,45 @@
{
  "activation_dropout": 0.0,
  "activation_function": "relu",
  "architectures": [
    "MarianMTModel"
  ],
  "attention_dropout": 0.0,
  "bad_words_ids": [
    [
      58922
    ]
  ],
  "bos_token_id": 0,
  "classifier_dropout": 0.0,
  "d_model": 1024,
  "decoder_attention_heads": 16,
  "decoder_ffn_dim": 4096,
  "decoder_layerdrop": 0.0,
  "decoder_layers": 6,
  "decoder_start_token_id": 58922,
  "decoder_vocab_size": 58923,
  "dropout": 0.1,
  "encoder_attention_heads": 16,
  "encoder_ffn_dim": 4096,
  "encoder_layerdrop": 0.0,
  "encoder_layers": 6,
  "eos_token_id": 37807,
  "forced_eos_token_id": 37807,
  "init_std": 0.02,
  "is_encoder_decoder": true,
  "max_length": 512,
  "max_position_embeddings": 1024,
  "model_type": "marian",
  "normalize_embedding": false,
  "num_beams": 4,
  "num_hidden_layers": 6,
  "pad_token_id": 58922,
  "scale_embedding": true,
  "share_encoder_decoder_embeddings": true,
  "static_position_embeddings": true,
  "torch_dtype": "float16",
  "transformers_version": "4.18.0.dev0",
  "use_cache": true,
  "vocab_size": 58923
}
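
This config describes a Marian transformer-big encoder-decoder: 6 encoder and 6 decoder layers, model dimension 1024, 16 attention heads, and a shared 58923-token vocabulary. A minimal sketch of inspecting these fields with the transformers AutoConfig API, assuming the public checkpoint name:

```python
from transformers import AutoConfig

# Assumption: public hub checkpoint name rather than a local path.
config = AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-tc-big-sh-en")

print(config.model_type)                               # marian
print(config.encoder_layers, config.decoder_layers)    # 6 6
print(config.d_model, config.encoder_attention_heads)  # 1024 16
print(config.vocab_size)                               # 58923
```
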
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed143513a5c68547347e787e45cf0b8ff7965e6efb414c17d14a42958e9d4e5c
size 594272899

source.spm ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:04edd3b1ef274a674fad5bd278165e20db3b9190af3ed93ecc208546fc18c813
size 848531
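
The source.spm and target.spm files (this one and the one below) are the SentencePiece models behind the spm32k tokenization noted in the model card. A minimal sketch of tokenizing a source sentence with the sentencepiece package, assuming source.spm has been fetched from LFS into the working directory:

```python
import sentencepiece as spm

# Assumption: source.spm downloaded locally (it is an LFS object in this repo).
sp = spm.SentencePieceProcessor(model_file="source.spm")
pieces = sp.encode("Ispostavilo se da je istina.", out_type=str)
print(pieces)             # subword pieces
print(sp.decode(pieces))  # round-trips back to the original text
```
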
special_tokens_map.json ADDED
@@ -0,0 +1 @@
{"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
target.spm ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f8162fb45022dc045c39eccfc02274ce988336f531a38eb3c892bd95692d7ae5
size 794188

tokenizer_config.json ADDED
@@ -0,0 +1 @@
{"source_lang": "sh", "target_lang": "en", "unk_token": "<unk>", "eos_token": "</s>", "pad_token": "<pad>", "model_max_length": 512, "sp_model_kwargs": {}, "separate_vocabs": false, "special_tokens_map_file": null, "name_or_path": "marian-models/opusTCv20210807+bt_transformer-big_2022-02-25/sh-en", "tokenizer_class": "MarianTokenizer"}

vocab.json ADDED
The diff for this file is too large to render. See raw diff