Initial commit

- .gitattributes +1 -0
- README.md +199 -0
- benchmark_results.txt +1 -0
- benchmark_translations.zip +0 -0
- config.json +41 -0
- generation_config.json +16 -0
- model.safetensors +3 -0
- pytorch_model.bin +3 -0
- source.spm +3 -0
- special_tokens_map.json +1 -0
- target.spm +3 -0
- tokenizer_config.json +1 -0
- vocab.json +0 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.spm filter=lfs diff=lfs merge=lfs -text
README.md
ADDED

---
library_name: transformers
language:
- chm
- de
- en
- es
- et
- fi
- fkv
- fr
- hu
- izh
- krl
- kv
- liv
- mdf
- mrj
- myv
- pt
- se
- sma
- smn
- udm
- vep
- vot

tags:
- translation
- opus-mt-tc-bible

license: apache-2.0
model-index:
- name: opus-mt-tc-bible-big-deu_eng_fra_por_spa-urj
  results:
  - task:
      name: Translation multi-multi
      type: translation
      args: multi-multi
    dataset:
      name: tatoeba-test-v2020-07-28-v2023-09-26
      type: tatoeba_mt
      args: multi-multi
    metrics:
    - name: BLEU
      type: bleu
      value: 37.1
    - name: chr-F
      type: chrf
      value: 0.61730
---
# opus-mt-tc-bible-big-deu_eng_fra_por_spa-urj

## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [Acknowledgements](#acknowledgements)

## Model Details

Neural machine translation model for translating from German, English, French, Portuguese and Spanish (deu+eng+fra+por+spa) to Uralic languages (urj).

This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages of the world. All models were originally trained with [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++, and then converted to PyTorch using the Hugging Face transformers library. Training data comes from [OPUS](https://opus.nlpl.eu/), and the training pipelines follow the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).

**Model Description:**
- **Developed by:** Language Technology Research Group at the University of Helsinki
- **Model Type:** Translation (transformer-big)
- **Release:** 2024-05-30
- **License:** Apache-2.0
- **Language(s):**
  - Source Language(s): deu eng fra por spa
  - Target Language(s): chm est fin fkv hun izh koi kom kpv krl liv mdf mrj myv sma sme smn udm vep vot vro
  - Valid Target Language Labels: >>chm<< >>enf<< >>enh<< >>est<< >>fin<< >>fit<< >>fkv<< >>fkv_Latn<< >>hun<< >>izh<< >>kca<< >>koi<< >>kom<< >>kpv<< >>krl<< >>liv<< >>liv_Latn<< >>lud<< >>mdf<< >>mns<< >>mrj<< >>mtm<< >>myv<< >>nio<< >>olo<< >>sel<< >>sia<< >>sjd<< >>sje<< >>sjk<< >>sjt<< >>sju<< >>sma<< >>sme<< >>smj<< >>smn<< >>sms<< >>udm<< >>vep<< >>vot<< >>vot_Latn<< >>vro<< >>xas<< >>yrk<<
- **Original Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-urj/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
- **Resources for more information:**
  - [OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-urj/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
  - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
  - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
  - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
  - [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
  - [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)

This is a multilingual translation model with multiple target languages. A sentence-initial language token is required, in the form `>>id<<` (id = a valid target language ID), e.g. `>>chm<<` (see the sketch below).
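
A minimal sketch of that convention (the `tag` helper and the example sentence are illustrative, not part of the model's API): the token is simply prepended to the raw source text before tokenization.

```python
# Hypothetical helper: prepend the target-language token the model expects.
def tag(sentence: str, target_lang: str) -> str:
    return f">>{target_lang}<< {sentence}"

print(tag("The weather is nice today.", "fin"))
# >>fin<< The weather is nice today.
```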
## Uses

This model can be used for translation and text-to-text generation.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware that the model is trained on various public data sets that may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

## How to Get Started With the Model

A short code example:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    ">>chm<< Replace this with text in an accepted source language.",
    ">>vro<< This is the second sentence."
]

model_name = "pytorch-models/opus-mt-tc-bible-big-deu_eng_fra_por_spa-urj"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))

for t in translated:
    print(tokenizer.decode(t, skip_special_tokens=True))
```

You can also use OPUS-MT models with the transformers pipelines, for example:

```python
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-urj")
print(pipe(">>chm<< Replace this with text in an accepted source language."))
```

## Training

- **Data**: opusTCv20230926max50+bt+jhubc ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
- **Pre-processing**: SentencePiece (spm32k,spm32k); see the sketch after this list
- **Model Type:** transformer-big
- **Original MarianNMT Model**: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-urj/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30.zip)
- **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
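
The two `spm32k` entries refer to separate 32k-vocabulary SentencePiece models for the source and target sides (`source.spm` and `target.spm` in this repository). `MarianTokenizer` applies them automatically; as a minimal sketch (assuming the `sentencepiece` package and a local copy of `source.spm`), the source model can also be inspected directly:

```python
import sentencepiece as spm

# Load the source-side SentencePiece model shipped with this repository.
sp = spm.SentencePieceProcessor(model_file="source.spm")

print(sp.get_piece_size())                          # size of the source subword vocabulary
print(sp.encode("This is a test.", out_type=str))   # subword pieces for a sample sentence
```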

## Evaluation

* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/deu%2Beng%2Bfra%2Bpor%2Bspa-urj/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-05-30)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-urj/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu+eng+fra+por+spa-urj/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-29.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)

| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|------|-------|--------|
| multi-multi | tatoeba-test-v2020-07-28-v2023-09-26 | 0.61730 | 37.1 | 10000 | 64457 |
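
Scores like these can be recomputed with the `sacrebleu` package. A minimal sketch (the two example lists are placeholders standing in for the real system outputs and references from the test-set files linked above):

```python
import sacrebleu

# Illustrative placeholders for system outputs and reference translations.
hyps = ["Tämä on testi."]
refs = ["Tämä on testi."]

bleu = sacrebleu.corpus_bleu(hyps, [refs])   # BLEU on a 0-100 scale
chrf = sacrebleu.corpus_chrf(hyps, [refs])   # chrF on a 0-100 scale
print(f"BLEU = {bleu.score:.1f}, chr-F = {chrf.score / 100:.5f}")  # chr-F is reported on a 0-1 scale above
```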

## Citation Information

* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w), [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/), and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (please cite these publications if you use this model)

```bibtex
@article{tiedemann2023democratizing,
  title={Democratizing neural machine translation with {OPUS-MT}},
  author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
  journal={Language Resources and Evaluation},
  number={58},
  pages={713--755},
  year={2023},
  publisher={Springer Nature},
  issn={1574-0218},
  doi={10.1007/s10579-023-09704-w}
}

@inproceedings{tiedemann-thottingal-2020-opus,
  title = "{OPUS}-{MT} {--} Building open translation services for the World",
  author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
  booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
  month = nov,
  year = "2020",
  address = "Lisboa, Portugal",
  publisher = "European Association for Machine Translation",
  url = "https://aclanthology.org/2020.eamt-1.61",
  pages = "479--480",
}

@inproceedings{tiedemann-2020-tatoeba,
  title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
  author = {Tiedemann, J{\"o}rg},
  booktitle = "Proceedings of the Fifth Conference on Machine Translation",
  month = nov,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2020.wmt-1.139",
  pages = "1174--1182",
}
```

## Acknowledgements

The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC – IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).

## Model conversion info

* transformers version: 4.45.1
* OPUS-MT git hash: 0882077
* port time: Tue Oct 8 10:42:04 EEST 2024
* port machine: LM0-400-22516.local
benchmark_results.txt
ADDED

multi-multi	tatoeba-test-v2020-07-28-v2023-09-26	0.61730	37.1	10000	64457
benchmark_translations.zip
ADDED
File without changes
config.json
ADDED

{
  "_name_or_path": "pytorch-models/opus-mt-tc-bible-big-deu_eng_fra_por_spa-urj",
  "activation_dropout": 0.0,
  "activation_function": "relu",
  "architectures": [
    "MarianMTModel"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 0,
  "classifier_dropout": 0.0,
  "d_model": 1024,
  "decoder_attention_heads": 16,
  "decoder_ffn_dim": 4096,
  "decoder_layerdrop": 0.0,
  "decoder_layers": 6,
  "decoder_start_token_id": 59383,
  "decoder_vocab_size": 59384,
  "dropout": 0.1,
  "encoder_attention_heads": 16,
  "encoder_ffn_dim": 4096,
  "encoder_layerdrop": 0.0,
  "encoder_layers": 6,
  "eos_token_id": 614,
  "forced_eos_token_id": null,
  "init_std": 0.02,
  "is_encoder_decoder": true,
  "max_length": null,
  "max_position_embeddings": 1024,
  "model_type": "marian",
  "normalize_embedding": false,
  "num_beams": null,
  "num_hidden_layers": 6,
  "pad_token_id": 59383,
  "scale_embedding": true,
  "share_encoder_decoder_embeddings": true,
  "static_position_embeddings": true,
  "torch_dtype": "float32",
  "transformers_version": "4.45.1",
  "use_cache": true,
  "vocab_size": 59384
}
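
These are the hyper-parameters of the transformer-big architecture named in the model card (6 encoder and 6 decoder layers, model dimension 1024, 16 attention heads, and a shared 59384-token vocabulary). A minimal sketch (assuming access to the Hugging Face hub) of inspecting them without downloading the weights:

```python
from transformers import AutoConfig

# Fetches only config.json, not the model weights.
config = AutoConfig.from_pretrained("Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-urj")
print(config.model_type, config.d_model, config.encoder_layers, config.decoder_layers)
```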
generation_config.json
ADDED

{
  "_from_model_config": true,
  "bad_words_ids": [
    [
      59383
    ]
  ],
  "bos_token_id": 0,
  "decoder_start_token_id": 59383,
  "eos_token_id": 614,
  "forced_eos_token_id": 614,
  "max_length": 512,
  "num_beams": 4,
  "pad_token_id": 59383,
  "transformers_version": "4.45.1"
}
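
These are the decoding defaults that `generate()` picks up automatically: beam search with 4 beams, outputs capped at 512 tokens, and the pad token (59383) banned via `bad_words_ids`. A minimal sketch of overriding them per call (model name as in the pipeline example above):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tc-bible-big-deu_eng_fra_por_spa-urj"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer([">>fin<< Good morning!"], return_tensors="pt")
# With no extra arguments, generate() uses the defaults from generation_config.json;
# individual settings can be overridden per call:
out = model.generate(**batch, num_beams=6, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```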
model.safetensors
ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:506d9f268c3d49aa1369aca021e3876f4cad6ec8b2c79ac1d8ace5ae4cf16488
size 948933520
pytorch_model.bin
ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:8b252117a69bfed9064ba83e450ff9b3f2c3d7f535c2ba44fe73f161890747e6
size 948984773
source.spm
ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:900e0c65b7faf46cd08dbc723d3a0b7214075d7a5f04a20a9eadbf4d112ceea7
size 811251
special_tokens_map.json
ADDED

{"eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
target.spm
ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:36698d7e29e0a3dd2fa01bce3f49402eda7fffb62a5cdd4f1c60d857d2a95dd5
size 821630
tokenizer_config.json
ADDED

{"source_lang": "deu+eng+fra+por+spa", "target_lang": "urj", "unk_token": "<unk>", "eos_token": "</s>", "pad_token": "<pad>", "model_max_length": 512, "sp_model_kwargs": {}, "separate_vocabs": false, "special_tokens_map_file": null, "name_or_path": "marian-models/opusTCv20230926max50+bt+jhubc_transformer-big_2024-05-30/deu+eng+fra+por+spa-urj", "tokenizer_class": "MarianTokenizer"}
vocab.json
ADDED

The diff for this file is too large to render. See raw diff.