- More information about released models for this language pair: [OPUS-MT afa-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-eng/README.md)
- [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian)
- [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/)
- [HPLT bilingual data v1 (as part of the Tatoeba Translation Challenge dataset)](https://hplt-project.org/datasets/v1)
- [A massively parallel Bible corpus](https://aclanthology.org/L14-1215/)

## Uses

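A minimal sketch of running the model with the `transformers` translation pipeline. The model id `Helsinki-NLP/opus-mt-tc-bible-big-afa-en` is an assumption inferred from the afa-eng language pair of this card; verify the exact id on the model hub before use. Since the target side is English only, no target-language token prefix should be needed on the input.

```python
# Hypothetical usage sketch: the model id below is assumed from the
# afa-eng language pair of this card; check it on the Hugging Face hub.
from transformers import pipeline

pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-bible-big-afa-en")

# Arabic source sentence ("I don't want to look stupid.") translated to English.
result = pipe("لا أريد أن أبدو غبيّا.")
print(result[0]["translation_text"])
```

Running this downloads the model weights on first use; `AutoTokenizer`/`MarianMTModel` can be used directly instead of the pipeline for batched translation.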
## Evaluation

* [Model scores at the OPUS-MT dashboard](https://opus.nlpl.eu/dashboard/index.php?pkg=opusmt&test=all&scoreslang=all&chart=standard&model=Tatoeba-MT-models/afa-eng/opusTCv20230926max50%2Bbt%2Bjhubc_transformer-big_2024-08-17)
* test set translations: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.test.txt)
* test set scores: [opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opusTCv20230926max50+bt+jhubc_transformer-big_2024-08-17.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)

* Publications: [Democratizing neural machine translation with OPUS-MT](https://doi.org/10.1007/s10579-023-09704-w), [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/), and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (please cite if you use this model):

```bibtex
@article{tiedemann2023democratizing,
  title={Democratizing neural machine translation with {OPUS-MT}},
  author={Tiedemann, J{\"o}rg and Aulamo, Mikko and Bakshandaeva, Daria and Boggia, Michele and Gr{\"o}nroos, Stig-Arne and Nieminen, Tommi and Raganato, Alessandro and Scherrer, Yves and Vazquez, Raul and Virpioja, Sami},
```

## Acknowledgements

The work is supported by the [HPLT project](https://hplt-project.org/), funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070350. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland, and the [EuroHPC supercomputer LUMI](https://www.lumi-supercomputer.eu/).

## Model conversion info

* transformers version: 4.45.1
* OPUS-MT git hash: a44ab31
* port time: Sun Oct 6 21:06:57 EEST 2024
* port machine: LM0-400-22516.local