Add multilingual to the language tag

#2
by lbourdois - opened
Files changed (1)
  1. README.md +18 -21
README.md CHANGED
@@ -2,61 +2,58 @@
  language:
  - en
  - et
+ - multilingual
+ license: cc-by-4.0
  tags:
  - translation
  - opus-mt-tc
- license: cc-by-4.0
  model-index:
  - name: opus-mt-tc-big-et-en
  results:
  - task:
- name: Translation est-eng
  type: translation
- args: est-eng
+ name: Translation est-eng
  dataset:
  name: flores101-devtest
  type: flores_101
  args: est eng devtest
  metrics:
- - name: BLEU
- - type: bleu
+ - type: bleu
  value: 38.6
+ name: BLEU
  - task:
- name: Translation est-eng
  type: translation
- args: est-eng
+ name: Translation est-eng
  dataset:
  name: newsdev2018
  type: newsdev2018
  args: est-eng
  metrics:
- - name: BLEU
- - type: bleu
+ - type: bleu
  value: 33.8
+ name: BLEU
  - task:
- name: Translation est-eng
  type: translation
- args: est-eng
+ name: Translation est-eng
  dataset:
  name: tatoeba-test-v2021-08-07
  type: tatoeba_mt
  args: est-eng
  metrics:
- - name: BLEU
- - type: bleu
+ - type: bleu
  value: 59.7
+ name: BLEU
  - task:
- name: Translation est-eng
  type: translation
- args: est-eng
+ name: Translation est-eng
  dataset:
  name: newstest2018
  type: wmt-2018-news
  args: est-eng
  metrics:
- - name: BLEU
- - type: bleu
+ - type: bleu
  value: 34.3
+ name: BLEU
  ---
  # opus-mt-tc-big-et-en
 
@@ -64,7 +61,7 @@ Neural machine translation model for translating from Estonian (et) to English (

  This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).

- * Publications: [OPUS-MT Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
+ * Publications: [OPUS-MT Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)

  ```
  @inproceedings{tiedemann-thottingal-2020-opus,
@@ -112,7 +109,7 @@ from transformers import MarianMTModel, MarianTokenizer

  src_text = [
      "Takso ootab.",
-     "Kon sa elät?"
+     "Kon sa el�t?"
  ]

  model_name = "pytorch-models/opus-mt-tc-big-et-en"
@@ -125,7 +122,7 @@ for t in translated:

  # expected output:
  # Taxi's waiting.
- # Kon you elät?
+ # Kon you el�t?
  ```

  You can also use OPUS-MT models with the transformers pipelines, for example:
@@ -154,7 +151,7 @@ print(pipe("Takso ootab."))

  ## Acknowledgements

- The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Unions Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
+ The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Unions Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.

  ## Model conversion info
 
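For context on the usage snippet touched by hunks `@@ -112` and `@@ -125`: the card loads the converted checkpoint directly with `MarianMTModel`/`MarianTokenizer`, but the diff only shows fragments of that example. Below is a minimal runnable reconstruction; it assumes the published Hub id `Helsinki-NLP/opus-mt-tc-big-et-en` (the card itself points at a local `pytorch-models/...` path) and fills in the loading and generation lines the hunks do not show.

```python
# Hedged reconstruction of the card's direct-usage example.
# Assumption: the public checkpoint id below; the card loads a local
# "pytorch-models/opus-mt-tc-big-et-en" path instead.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-big-et-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = [
    "Takso ootab.",
    "Kon sa elät?",
]

# Tokenize the Estonian inputs and let the Marian model generate English.
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
# Expected output per the card: "Taxi's waiting." / "Kon you elät?"
```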
 
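The context line "You can also use OPUS-MT models with the transformers pipelines, for example:" refers to the pipeline variant whose call `print(pipe("Takso ootab."))` appears in the `@@ -154` hunk header. A minimal sketch, again assuming the `Helsinki-NLP/opus-mt-tc-big-et-en` Hub id:

```python
# Pipeline variant of the same example; the model id is assumed, and the
# printed structure is the standard translation-pipeline output.
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-et-en")
print(pipe("Takso ootab."))
# e.g. [{'translation_text': "Taxi's waiting."}]
```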