modelId (string, 4-112 chars) | lastModified (string, 24 chars) | tags (sequence) | pipeline_tag (21 classes) | files (sequence) | publishedBy (string, 2-37 chars) | downloads_last_month (int32, 0-9.44M) | library (15 classes) | modelCard (string, 0-100k chars)
---|---|---|---|---|---|---|---|---
Helsinki-NLP/opus-mt-aed-es | 2021-01-18T07:45:56.000Z | [
"pytorch",
"marian",
"seq2seq",
"aed",
"es",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 59 | transformers | ---
tags:
- translation
---
### opus-mt-aed-es
* source languages: aed
* target languages: es
* OPUS readme: [aed-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/aed-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.aed.es | 89.1 | 0.915 |
|
Helsinki-NLP/opus-mt-af-de | 2021-01-18T07:46:00.000Z | [
"pytorch",
"marian",
"seq2seq",
"af",
"de",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 45 | transformers | ---
tags:
- translation
---
### opus-mt-af-de
* source languages: af
* target languages: de
* OPUS readme: [af-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-19.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.zip)
* test set translations: [opus-2020-01-19.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.test.txt)
* test set scores: [opus-2020-01-19.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.af.de | 48.6 | 0.681 |
|
Helsinki-NLP/opus-mt-af-en | 2021-01-18T07:46:04.000Z | [
"pytorch",
"marian",
"seq2seq",
"af",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 264 | transformers | ---
tags:
- translation
---
### opus-mt-af-en
* source languages: af
* target languages: en
* OPUS readme: [af-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.af.en | 60.8 | 0.736 |
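
A minimal usage sketch (not part of the original card): it assumes the checkpoint is loaded through the Transformers Marian classes implied by the tags above, and the Afrikaans example sentence is only an illustration.

```python
# Hedged sketch: translate Afrikaans to English with the Marian classes.
# The input sentence is illustrative only.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-af-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Ek hou van koffie."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```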
|
Helsinki-NLP/opus-mt-af-eo | 2021-01-18T07:46:08.000Z | [
"pytorch",
"marian",
"seq2seq",
"af",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 31 | transformers | ---
language:
- af
- eo
tags:
- translation
license: apache-2.0
---
### afr-epo
* source group: Afrikaans
* target group: Esperanto
* OPUS readme: [afr-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-epo/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.epo | 18.3 | 0.411 |
### System Info:
- hf_name: afr-epo
- source_languages: afr
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'eo']
- src_constituents: {'afr'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.test.txt
- src_alpha3: afr
- tgt_alpha3: epo
- short_pair: af-eo
- chrF2_score: 0.411
- bleu: 18.3
- brevity_penalty: 0.995
- ref_len: 7517.0
- src_name: Afrikaans
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: af
- tgt_alpha2: eo
- prefer_old: False
- long_pair: afr-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-af-es | 2021-01-18T07:46:14.000Z | [
"pytorch",
"marian",
"seq2seq",
"af",
"es",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 63 | transformers | ---
language:
- af
- es
tags:
- translation
license: apache-2.0
---
### afr-spa
* source group: Afrikaans
* target group: Spanish
* OPUS readme: [afr-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.spa | 49.9 | 0.680 |
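
A small sketch (not part of the original card) for retrieving the released score file linked above; using `requests` is an assumption, the URL is taken verbatim from the card.

```python
# Hedged sketch: download the evaluation file listed in this card and print it.
import requests

eval_url = "https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.eval.txt"
print(requests.get(eval_url, timeout=30).text)
```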
### System Info:
- hf_name: afr-spa
- source_languages: afr
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'es']
- src_constituents: {'afr'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: spa
- short_pair: af-es
- chrF2_score: 0.68
- bleu: 49.9
- brevity_penalty: 1.0
- ref_len: 2783.0
- src_name: Afrikaans
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: es
- prefer_old: False
- long_pair: afr-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-af-fi | 2021-01-18T07:46:19.000Z | [
"pytorch",
"marian",
"seq2seq",
"af",
"fi",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 32 | transformers | ---
tags:
- translation
---
### opus-mt-af-fi
* source languages: af
* target languages: fi
* OPUS readme: [af-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.af.fi | 32.3 | 0.576 |
|
Helsinki-NLP/opus-mt-af-fr | 2021-01-18T07:46:24.000Z | [
"pytorch",
"marian",
"seq2seq",
"af",
"fr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 54 | transformers | ---
tags:
- translation
---
### opus-mt-af-fr
* source languages: af
* target languages: fr
* OPUS readme: [af-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.af.fr | 35.3 | 0.543 |
|
Helsinki-NLP/opus-mt-af-nl | 2021-01-18T07:46:27.000Z | [
"pytorch",
"marian",
"seq2seq",
"af",
"nl",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 27 | transformers | ---
language:
- af
- nl
tags:
- translation
license: apache-2.0
---
### afr-nld
* source group: Afrikaans
* target group: Dutch
* OPUS readme: [afr-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-nld/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.nld | 55.2 | 0.715 |
### System Info:
- hf_name: afr-nld
- source_languages: afr
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'nl']
- src_constituents: {'afr'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: nld
- short_pair: af-nl
- chrF2_score: 0.715
- bleu: 55.2
- brevity_penalty: 0.995
- ref_len: 6710.0
- src_name: Afrikaans
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: nl
- prefer_old: False
- long_pair: afr-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-af-ru | 2021-01-18T07:46:32.000Z | [
"pytorch",
"marian",
"seq2seq",
"af",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 33 | transformers | ---
language:
- af
- ru
tags:
- translation
license: apache-2.0
---
### afr-rus
* source group: Afrikaans
* target group: Russian
* OPUS readme: [afr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.rus | 38.2 | 0.580 |
### System Info:
- hf_name: afr-rus
- source_languages: afr
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'ru']
- src_constituents: {'afr'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: rus
- short_pair: af-ru
- chrF2_score: 0.58
- bleu: 38.2
- brevity_penalty: 0.992
- ref_len: 1213.0
- src_name: Afrikaans
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: ru
- prefer_old: False
- long_pair: afr-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-af-sv | 2021-01-18T07:46:36.000Z | [
"pytorch",
"marian",
"seq2seq",
"af",
"sv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 37 | transformers | ---
tags:
- translation
---
### opus-mt-af-sv
* source languages: af
* target languages: sv
* OPUS readme: [af-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.af.sv | 40.4 | 0.599 |
|
Helsinki-NLP/opus-mt-afa-afa | 2021-01-18T07:46:40.000Z | [
"pytorch",
"marian",
"seq2seq",
"so",
"ti",
"am",
"he",
"mt",
"ar",
"afa",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 65 | transformers | ---
language:
- so
- ti
- am
- he
- mt
- ar
- afa
tags:
- translation
license: apache-2.0
---
### afa-afa
* source group: Afro-Asiatic languages
* target group: Afro-Asiatic languages
* OPUS readme: [afa-afa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-afa/README.md)
* model: transformer
* source language(s): apc ara arq arz heb kab mlt shy_Latn thv
* target language(s): apc ara arq arz heb kab mlt shy_Latn thv
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form of `>>id<<` (id = a valid target language ID); see the usage sketch after this list
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.eval.txt)
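
A usage sketch (not part of the original card) illustrating the required `>>id<<` target-language token; the Arabic example sentence and the choice of Hebrew as the target are assumptions.

```python
# Hedged sketch: this multilingual model needs a sentence-initial >>id<< token
# naming the target language (here Hebrew); the Arabic input is illustrative.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-afa-afa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = [">>heb<< مرحبا بالعالم"]  # Arabic source prefixed with the Hebrew target token
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```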
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara-ara.ara.ara | 4.3 | 0.148 |
| Tatoeba-test.ara-heb.ara.heb | 31.9 | 0.525 |
| Tatoeba-test.ara-kab.ara.kab | 0.3 | 0.120 |
| Tatoeba-test.ara-mlt.ara.mlt | 14.0 | 0.428 |
| Tatoeba-test.ara-shy.ara.shy | 1.3 | 0.050 |
| Tatoeba-test.heb-ara.heb.ara | 17.0 | 0.464 |
| Tatoeba-test.heb-kab.heb.kab | 1.9 | 0.104 |
| Tatoeba-test.kab-ara.kab.ara | 0.3 | 0.044 |
| Tatoeba-test.kab-heb.kab.heb | 5.1 | 0.099 |
| Tatoeba-test.kab-shy.kab.shy | 2.2 | 0.009 |
| Tatoeba-test.kab-tmh.kab.tmh | 10.7 | 0.007 |
| Tatoeba-test.mlt-ara.mlt.ara | 29.1 | 0.498 |
| Tatoeba-test.multi.multi | 20.8 | 0.434 |
| Tatoeba-test.shy-ara.shy.ara | 1.2 | 0.053 |
| Tatoeba-test.shy-kab.shy.kab | 2.0 | 0.134 |
| Tatoeba-test.tmh-kab.tmh.kab | 0.0 | 0.047 |
### System Info:
- hf_name: afa-afa
- source_languages: afa
- target_languages: afa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-afa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']
- src_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- tgt_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.test.txt
- src_alpha3: afa
- tgt_alpha3: afa
- short_pair: afa-afa
- chrF2_score: 0.434
- bleu: 20.8
- brevity_penalty: 1.0
- ref_len: 15215.0
- src_name: Afro-Asiatic languages
- tgt_name: Afro-Asiatic languages
- train_date: 2020-07-26
- src_alpha2: afa
- tgt_alpha2: afa
- prefer_old: False
- long_pair: afa-afa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-afa-en | 2021-01-18T07:46:45.000Z | [
"pytorch",
"marian",
"seq2seq",
"so",
"ti",
"am",
"he",
"mt",
"ar",
"afa",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 40 | transformers | ---
language:
- so
- ti
- am
- he
- mt
- ar
- afa
- en
tags:
- translation
license: apache-2.0
---
### afa-eng
* source group: Afro-Asiatic languages
* target group: English
* OPUS readme: [afa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-eng/README.md)
* model: transformer
* source language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.amh-eng.amh.eng | 35.9 | 0.550 |
| Tatoeba-test.ara-eng.ara.eng | 36.6 | 0.543 |
| Tatoeba-test.hau-eng.hau.eng | 11.9 | 0.327 |
| Tatoeba-test.heb-eng.heb.eng | 42.7 | 0.591 |
| Tatoeba-test.kab-eng.kab.eng | 4.3 | 0.213 |
| Tatoeba-test.mlt-eng.mlt.eng | 44.3 | 0.618 |
| Tatoeba-test.multi.eng | 27.1 | 0.464 |
| Tatoeba-test.rif-eng.rif.eng | 3.5 | 0.141 |
| Tatoeba-test.shy-eng.shy.eng | 0.6 | 0.125 |
| Tatoeba-test.som-eng.som.eng | 23.6 | 0.472 |
| Tatoeba-test.tir-eng.tir.eng | 13.1 | 0.328 |
### System Info:
- hf_name: afa-eng
- source_languages: afa
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa', 'en']
- src_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.test.txt
- src_alpha3: afa
- tgt_alpha3: eng
- short_pair: afa-en
- chrF2_score: 0.464
- bleu: 27.1
- brevity_penalty: 1.0
- ref_len: 69373.0
- src_name: Afro-Asiatic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: afa
- tgt_alpha2: en
- prefer_old: False
- long_pair: afa-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-alv-en | 2021-01-18T07:46:50.000Z | [
"pytorch",
"marian",
"seq2seq",
"sn",
"rw",
"wo",
"ig",
"sg",
"ee",
"zu",
"lg",
"ts",
"ln",
"ny",
"yo",
"rn",
"xh",
"alv",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 40 | transformers | ---
language:
- sn
- rw
- wo
- ig
- sg
- ee
- zu
- lg
- ts
- ln
- ny
- yo
- rn
- xh
- alv
- en
tags:
- translation
license: apache-2.0
---
### alv-eng
* source group: Atlantic-Congo languages
* target group: English
* OPUS readme: [alv-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md)
* model: transformer
* source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ewe-eng.ewe.eng | 6.3 | 0.328 |
| Tatoeba-test.ful-eng.ful.eng | 0.4 | 0.108 |
| Tatoeba-test.ibo-eng.ibo.eng | 4.5 | 0.196 |
| Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.511 |
| Tatoeba-test.lin-eng.lin.eng | 2.8 | 0.213 |
| Tatoeba-test.lug-eng.lug.eng | 3.4 | 0.140 |
| Tatoeba-test.multi.eng | 20.9 | 0.376 |
| Tatoeba-test.nya-eng.nya.eng | 38.7 | 0.492 |
| Tatoeba-test.run-eng.run.eng | 24.5 | 0.417 |
| Tatoeba-test.sag-eng.sag.eng | 5.5 | 0.177 |
| Tatoeba-test.sna-eng.sna.eng | 26.9 | 0.412 |
| Tatoeba-test.swa-eng.swa.eng | 4.9 | 0.196 |
| Tatoeba-test.toi-eng.toi.eng | 3.9 | 0.147 |
| Tatoeba-test.tso-eng.tso.eng | 76.7 | 0.957 |
| Tatoeba-test.umb-eng.umb.eng | 4.0 | 0.195 |
| Tatoeba-test.wol-eng.wol.eng | 3.7 | 0.170 |
| Tatoeba-test.xho-eng.xho.eng | 38.9 | 0.556 |
| Tatoeba-test.yor-eng.yor.eng | 25.1 | 0.412 |
| Tatoeba-test.zul-eng.zul.eng | 46.1 | 0.623 |
### System Info:
- hf_name: alv-eng
- source_languages: alv
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv', 'en']
- src_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt
- src_alpha3: alv
- tgt_alpha3: eng
- short_pair: alv-en
- chrF2_score: 0.376
- bleu: 20.9
- brevity_penalty: 1.0
- ref_len: 15208.0
- src_name: Atlantic-Congo languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: alv
- tgt_alpha2: en
- prefer_old: False
- long_pair: alv-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-am-sv | 2021-01-18T07:46:55.000Z | [
"pytorch",
"marian",
"seq2seq",
"am",
"sv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 51 | transformers | ---
tags:
- translation
---
### opus-mt-am-sv
* source languages: am
* target languages: sv
* OPUS readme: [am-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/am-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.am.sv | 21.0 | 0.377 |
|
Helsinki-NLP/opus-mt-ar-de | 2021-01-18T07:47:00.000Z | [
"pytorch",
"marian",
"seq2seq",
"ar",
"de",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 113 | transformers | ---
language:
- ar
- de
tags:
- translation
license: apache-2.0
---
### ara-deu
* source group: Arabic
* target group: German
* OPUS readme: [ara-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md)
* model: transformer-align
* source language(s): afb apc ara ara_Latn arq arz
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.deu | 44.7 | 0.629 |
### System Info:
- hf_name: ara-deu
- source_languages: ara
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'de']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: deu
- short_pair: ar-de
- chrF2_score: 0.629
- bleu: 44.7
- brevity_penalty: 0.986
- ref_len: 8371.0
- src_name: Arabic
- tgt_name: German
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: de
- prefer_old: False
- long_pair: ara-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ar-el | 2021-01-18T07:47:04.000Z | [
"pytorch",
"marian",
"seq2seq",
"ar",
"el",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 32 | transformers | ---
language:
- ar
- el
tags:
- translation
license: apache-2.0
---
### ara-ell
* source group: Arabic
* target group: Modern Greek (1453-)
* OPUS readme: [ara-ell](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ell/README.md)
* model: transformer-align
* source language(s): ara arz
* target language(s): ell
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.ell | 43.9 | 0.636 |
### System Info:
- hf_name: ara-ell
- source_languages: ara
- target_languages: ell
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ell/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'el']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'ell'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: ell
- short_pair: ar-el
- chrF2_score: 0.636
- bleu: 43.9
- brevity_penalty: 0.993
- ref_len: 2009.0
- src_name: Arabic
- tgt_name: Modern Greek (1453-)
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: el
- prefer_old: False
- long_pair: ara-ell
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ar-en | 2021-02-28T14:25:01.000Z | [
"pytorch",
"rust",
"marian",
"seq2seq",
"ar",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"rust_model.ot",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 1,989 | transformers | ---
tags:
- translation
---
### opus-mt-ar-en
* source languages: ar
* target languages: en
* OPUS readme: [ar-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ar-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ar.en | 49.4 | 0.661 |
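
A short sketch (not part of the original card) showing the same checkpoint through the high-level Transformers `pipeline` helper; the Arabic example sentence is an assumption.

```python
# Hedged sketch: run the checkpoint through the translation pipeline.
# The Arabic sentence is illustrative only.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ar-en")
print(translator("أين أقرب محطة قطار؟"))
```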
|
Helsinki-NLP/opus-mt-ar-eo | 2021-01-18T07:47:12.000Z | [
"pytorch",
"marian",
"seq2seq",
"ar",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 36 | transformers | ---
language:
- ar
- eo
tags:
- translation
license: apache-2.0
---
### ara-epo
* source group: Arabic
* target group: Esperanto
* OPUS readme: [ara-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md)
* model: transformer-align
* source language(s): apc apc_Latn ara arq arq_Latn arz
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.epo | 18.9 | 0.376 |
### System Info:
- hf_name: ara-epo
- source_languages: ara
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'eo']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt
- src_alpha3: ara
- tgt_alpha3: epo
- short_pair: ar-eo
- chrF2_score: 0.376
- bleu: 18.9
- brevity_penalty: 0.948
- ref_len: 4506.0
- src_name: Arabic
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: ar
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ara-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ar-es | 2021-01-18T07:47:16.000Z | [
"pytorch",
"marian",
"seq2seq",
"ar",
"es",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 24 | transformers | ---
language:
- ar
- es
tags:
- translation
license: apache-2.0
---
### ara-spa
* source group: Arabic
* target group: Spanish
* OPUS readme: [ara-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-spa/README.md)
* model: transformer
* source language(s): apc apc_Latn ara arq
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.spa | 46.0 | 0.641 |
### System Info:
- hf_name: ara-spa
- source_languages: ara
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'es']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: spa
- short_pair: ar-es
- chrF2_score: 0.641
- bleu: 46.0
- brevity_penalty: 0.962
- ref_len: 9708.0
- src_name: Arabic
- tgt_name: Spanish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: es
- prefer_old: False
- long_pair: ara-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ar-fr | 2021-01-18T07:47:20.000Z | [
"pytorch",
"marian",
"seq2seq",
"ar",
"fr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 207 | transformers | ---
tags:
- translation
---
### opus-mt-ar-fr
* source languages: ar
* target languages: fr
* OPUS readme: [ar-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ar-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ar.fr | 43.5 | 0.602 |
|
Helsinki-NLP/opus-mt-ar-he | 2021-01-18T07:47:25.000Z | [
"pytorch",
"marian",
"seq2seq",
"ar",
"he",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 25 | transformers | ---
language:
- ar
- he
tags:
- translation
license: apache-2.0
---
### ara-heb
* source group: Arabic
* target group: Hebrew
* OPUS readme: [ara-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-heb/README.md)
* model: transformer
* source language(s): apc apc_Latn ara arq arz
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.heb | 40.4 | 0.605 |
### System Info:
- hf_name: ara-heb
- source_languages: ara
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'he']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: heb
- short_pair: ar-he
- chrF2_score: 0.605
- bleu: 40.4
- brevity_penalty: 1.0
- ref_len: 6801.0
- src_name: Arabic
- tgt_name: Hebrew
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: he
- prefer_old: False
- long_pair: ara-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ar-it | 2021-01-18T07:47:30.000Z | [
"pytorch",
"marian",
"seq2seq",
"ar",
"it",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 28 | transformers | ---
language:
- ar
- it
tags:
- translation
license: apache-2.0
---
### ara-ita
* source group: Arabic
* target group: Italian
* OPUS readme: [ara-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md)
* model: transformer
* source language(s): ara
* target language(s): ita
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.ita | 44.2 | 0.658 |
### System Info:
- hf_name: ara-ita
- source_languages: ara
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'it']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: ita
- short_pair: ar-it
- chrF2_score: 0.658
- bleu: 44.2
- brevity_penalty: 0.989
- ref_len: 1495.0
- src_name: Arabic
- tgt_name: Italian
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: it
- prefer_old: False
- long_pair: ara-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ar-pl | 2021-01-18T07:47:35.000Z | [
"pytorch",
"marian",
"seq2seq",
"ar",
"pl",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 28 | transformers | ---
language:
- ar
- pl
tags:
- translation
license: apache-2.0
---
### ara-pol
* source group: Arabic
* target group: Polish
* OPUS readme: [ara-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-pol/README.md)
* model: transformer
* source language(s): ara arz
* target language(s): pol
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.pol | 38.0 | 0.623 |
### System Info:
- hf_name: ara-pol
- source_languages: ara
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'pl']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: pol
- short_pair: ar-pl
- chrF2_score: 0.623
- bleu: 38.0
- brevity_penalty: 0.948
- ref_len: 1171.0
- src_name: Arabic
- tgt_name: Polish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: pl
- prefer_old: False
- long_pair: ara-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ar-ru | 2021-01-18T07:47:44.000Z | [
"pytorch",
"marian",
"seq2seq",
"ar",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 27 | transformers | ---
language:
- ar
- ru
tags:
- translation
license: apache-2.0
---
### ara-rus
* source group: Arabic
* target group: Russian
* OPUS readme: [ara-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-rus/README.md)
* model: transformer
* source language(s): apc ara arz
* target language(s): rus
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.rus | 42.5 | 0.605 |
### System Info:
- hf_name: ara-rus
- source_languages: ara
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'ru']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: rus
- short_pair: ar-ru
- chrF2_score: 0.605
- bleu: 42.5
- brevity_penalty: 0.97
- ref_len: 21830.0
- src_name: Arabic
- tgt_name: Russian
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: ru
- prefer_old: False
- long_pair: ara-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ar-tr | 2021-01-18T07:47:51.000Z | [
"pytorch",
"marian",
"seq2seq",
"ar",
"tr",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 34 | transformers | ---
language:
- ar
- tr
tags:
- translation
license: apache-2.0
---
### ara-tur
* source group: Arabic
* target group: Turkish
* OPUS readme: [ara-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md)
* model: transformer
* source language(s): apc_Latn ara ara_Latn arq_Latn
* target language(s): tur
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.tur | 33.1 | 0.619 |
### System Info:
- hf_name: ara-tur
- source_languages: ara
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'tr']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: tur
- short_pair: ar-tr
- chrF2_score: 0.619
- bleu: 33.1
- brevity_penalty: 0.957
- ref_len: 6949.0
- src_name: Arabic
- tgt_name: Turkish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: tr
- prefer_old: False
- long_pair: ara-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-art-en | 2021-01-18T07:47:57.000Z | [
"pytorch",
"marian",
"seq2seq",
"eo",
"io",
"art",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 32 | transformers | ---
language:
- eo
- io
- art
- en
tags:
- translation
license: apache-2.0
---
### art-eng
* source group: Artificial languages
* target group: English
* OPUS readme: [art-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/art-eng/README.md)
* model: transformer
* source language(s): afh_Latn avk_Latn dws_Latn epo ido ido_Latn ile_Latn ina_Latn jbo jbo_Cyrl jbo_Latn ldn_Latn lfn_Cyrl lfn_Latn nov_Latn qya qya_Latn sjn_Latn tlh_Latn tzl tzl_Latn vol_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afh-eng.afh.eng | 1.2 | 0.099 |
| Tatoeba-test.avk-eng.avk.eng | 0.4 | 0.105 |
| Tatoeba-test.dws-eng.dws.eng | 1.6 | 0.076 |
| Tatoeba-test.epo-eng.epo.eng | 34.6 | 0.530 |
| Tatoeba-test.ido-eng.ido.eng | 12.7 | 0.310 |
| Tatoeba-test.ile-eng.ile.eng | 4.6 | 0.218 |
| Tatoeba-test.ina-eng.ina.eng | 5.8 | 0.254 |
| Tatoeba-test.jbo-eng.jbo.eng | 0.2 | 0.115 |
| Tatoeba-test.ldn-eng.ldn.eng | 0.7 | 0.083 |
| Tatoeba-test.lfn-eng.lfn.eng | 1.8 | 0.172 |
| Tatoeba-test.multi.eng | 11.6 | 0.287 |
| Tatoeba-test.nov-eng.nov.eng | 5.1 | 0.215 |
| Tatoeba-test.qya-eng.qya.eng | 0.7 | 0.113 |
| Tatoeba-test.sjn-eng.sjn.eng | 0.9 | 0.090 |
| Tatoeba-test.tlh-eng.tlh.eng | 0.2 | 0.124 |
| Tatoeba-test.tzl-eng.tzl.eng | 1.4 | 0.109 |
| Tatoeba-test.vol-eng.vol.eng | 0.5 | 0.115 |
### System Info:
- hf_name: art-eng
- source_languages: art
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/art-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'io', 'art', 'en']
- src_constituents: {'sjn_Latn', 'tzl', 'vol_Latn', 'qya', 'tlh_Latn', 'ile_Latn', 'ido_Latn', 'tzl_Latn', 'jbo_Cyrl', 'jbo', 'lfn_Latn', 'nov_Latn', 'dws_Latn', 'ldn_Latn', 'avk_Latn', 'lfn_Cyrl', 'ina_Latn', 'jbo_Latn', 'epo', 'afh_Latn', 'qya_Latn', 'ido'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.test.txt
- src_alpha3: art
- tgt_alpha3: eng
- short_pair: art-en
- chrF2_score: 0.287
- bleu: 11.6
- brevity_penalty: 1.0
- ref_len: 73037.0
- src_name: Artificial languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: art
- tgt_alpha2: en
- prefer_old: False
- long_pair: art-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ase-de | 2021-01-18T07:48:04.000Z | [
"pytorch",
"marian",
"seq2seq",
"ase",
"de",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 65 | transformers | ---
tags:
- translation
---
### opus-mt-ase-de
* source languages: ase
* target languages: de
* OPUS readme: [ase-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.de | 27.2 | 0.478 |
|
Helsinki-NLP/opus-mt-ase-en | 2021-01-18T07:48:09.000Z | [
"pytorch",
"marian",
"seq2seq",
"ase",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 48 | transformers | ---
tags:
- translation
---
### opus-mt-ase-en
* source languages: ase
* target languages: en
* OPUS readme: [ase-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.en | 99.5 | 0.997 |
|
Helsinki-NLP/opus-mt-ase-es | 2021-01-18T07:48:16.000Z | [
"pytorch",
"marian",
"seq2seq",
"ase",
"es",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 41 | transformers | ---
tags:
- translation
---
### opus-mt-ase-es
* source languages: ase
* target languages: es
* OPUS readme: [ase-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.es | 31.7 | 0.498 |
|
Helsinki-NLP/opus-mt-ase-fr | 2021-01-18T07:48:22.000Z | [
"pytorch",
"marian",
"seq2seq",
"ase",
"fr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 29 | transformers | ---
tags:
- translation
---
### opus-mt-ase-fr
* source languages: ase
* target languages: fr
* OPUS readme: [ase-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.fr | 37.8 | 0.553 |
|
Helsinki-NLP/opus-mt-ase-sv | 2021-01-18T07:48:27.000Z | [
"pytorch",
"marian",
"seq2seq",
"ase",
"sv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 24 | transformers | ---
tags:
- translation
---
### opus-mt-ase-sv
* source languages: ase
* target languages: sv
* OPUS readme: [ase-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-sv/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-sv/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-sv/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.sv | 39.7 | 0.576 |
|
Helsinki-NLP/opus-mt-az-en | 2021-01-18T07:48:32.000Z | [
"pytorch",
"marian",
"seq2seq",
"az",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 406 | transformers | ---
language:
- az
- en
tags:
- translation
license: apache-2.0
---
### aze-eng
* source group: Azerbaijani
* target group: English
* OPUS readme: [aze-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-eng/README.md)
* model: transformer-align
* source language(s): aze_Latn
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.aze.eng | 31.9 | 0.490 |
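As a quick sketch, the checkpoint can also be driven through the high-level `pipeline` API; the Azerbaijani input sentence below is illustrative only.

```python
# Minimal sketch: the transformers pipeline API with the az-en checkpoint.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-az-en")
result = translator("Bu gün hava çox gözəldir.")  # illustrative input sentence
print(result[0]["translation_text"])
```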
### System Info:
- hf_name: aze-eng
- source_languages: aze
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['az', 'en']
- src_constituents: {'aze_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-eng/opus-2020-06-16.test.txt
- src_alpha3: aze
- tgt_alpha3: eng
- short_pair: az-en
- chrF2_score: 0.49
- bleu: 31.9
- brevity_penalty: 0.997
- ref_len: 16165.0
- src_name: Azerbaijani
- tgt_name: English
- train_date: 2020-06-16
- src_alpha2: az
- tgt_alpha2: en
- prefer_old: False
- long_pair: aze-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-az-es | 2021-01-18T07:48:37.000Z | [
"pytorch",
"marian",
"seq2seq",
"az",
"es",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 36 | transformers | ---
language:
- az
- es
tags:
- translation
license: apache-2.0
---
### aze-spa
* source group: Azerbaijani
* target group: Spanish
* OPUS readme: [aze-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-spa/README.md)
* model: transformer-align
* source language(s): aze_Latn
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.aze.spa | 11.8 | 0.346 |
### System Info:
- hf_name: aze-spa
- source_languages: aze
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['az', 'es']
- src_constituents: {'aze_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-spa/opus-2020-06-16.test.txt
- src_alpha3: aze
- tgt_alpha3: spa
- short_pair: az-es
- chrF2_score: 0.346
- bleu: 11.8
- brevity_penalty: 1.0
- ref_len: 1144.0
- src_name: Azerbaijani
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: az
- tgt_alpha2: es
- prefer_old: False
- long_pair: aze-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-az-tr | 2021-01-18T07:48:41.000Z | [
"pytorch",
"marian",
"seq2seq",
"az",
"tr",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 37 | transformers | ---
language:
- az
- tr
tags:
- translation
license: apache-2.0
---
### aze-tur
* source group: Azerbaijani
* target group: Turkish
* OPUS readme: [aze-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-tur/README.md)
* model: transformer-align
* source language(s): aze_Latn
* target language(s): tur
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.aze.tur | 24.4 | 0.529 |
### System Info:
- hf_name: aze-tur
- source_languages: aze
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aze-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['az', 'tr']
- src_constituents: {'aze_Latn'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aze-tur/opus-2020-06-16.test.txt
- src_alpha3: aze
- tgt_alpha3: tur
- short_pair: az-tr
- chrF2_score: 0.529
- bleu: 24.4
- brevity_penalty: 0.956
- ref_len: 5380.0
- src_name: Azerbaijani
- tgt_name: Turkish
- train_date: 2020-06-16
- src_alpha2: az
- tgt_alpha2: tr
- prefer_old: False
- long_pair: aze-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bat-en | 2021-01-18T07:48:50.000Z | [
"pytorch",
"marian",
"seq2seq",
"lt",
"lv",
"bat",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 40 | transformers | ---
language:
- lt
- lv
- bat
- en
tags:
- translation
license: apache-2.0
---
### bat-eng
* source group: Baltic languages
* target group: English
* OPUS readme: [bat-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bat-eng/README.md)
* model: transformer
* source language(s): lav lit ltg prg_Latn sgs
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2017-enlv-laveng.lav.eng | 27.5 | 0.566 |
| newsdev2019-enlt-liteng.lit.eng | 27.8 | 0.557 |
| newstest2017-enlv-laveng.lav.eng | 21.1 | 0.512 |
| newstest2019-lten-liteng.lit.eng | 30.2 | 0.592 |
| Tatoeba-test.lav-eng.lav.eng | 51.5 | 0.687 |
| Tatoeba-test.lit-eng.lit.eng | 55.1 | 0.703 |
| Tatoeba-test.multi.eng | 50.6 | 0.662 |
| Tatoeba-test.prg-eng.prg.eng | 1.0 | 0.159 |
| Tatoeba-test.sgs-eng.sgs.eng | 16.5 | 0.265 |
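Since the source side is multilingual, one checkpoint covers several Baltic languages; the sketch below mixes illustrative Latvian and Lithuanian sentences in a single batch and assumes `transformers` and `sentencepiece` are installed.

```python
# Minimal sketch: one bat-en checkpoint handling Latvian and Lithuanian input.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-bat-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = [
    "Es šodien eju uz darbu.",    # Latvian (illustrative)
    "Aš šiandien einu į darbą.",  # Lithuanian (illustrative)
]
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
for line in tokenizer.batch_decode(generated, skip_special_tokens=True):
    print(line)
```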
### System Info:
- hf_name: bat-eng
- source_languages: bat
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bat-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['lt', 'lv', 'bat', 'en']
- src_constituents: {'lit', 'lav', 'prg_Latn', 'ltg', 'sgs'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bat-eng/opus2m-2020-07-31.test.txt
- src_alpha3: bat
- tgt_alpha3: eng
- short_pair: bat-en
- chrF2_score: 0.662
- bleu: 50.6
- brevity_penalty: 0.989
- ref_len: 30772.0
- src_name: Baltic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: bat
- tgt_alpha2: en
- prefer_old: False
- long_pair: bat-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bcl-de | 2021-01-18T07:48:57.000Z | [
"pytorch",
"marian",
"seq2seq",
"bcl",
"de",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 54 | transformers | ---
tags:
- translation
---
### opus-mt-bcl-de
* source languages: bcl
* target languages: de
* OPUS readme: [bcl-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.de | 30.3 | 0.510 |
|
Helsinki-NLP/opus-mt-bcl-en | 2021-01-18T07:49:03.000Z | [
"pytorch",
"marian",
"seq2seq",
"bcl",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 54 | transformers | ---
tags:
- translation
---
### opus-mt-bcl-en
* source languages: bcl
* target languages: en
* OPUS readme: [bcl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-11.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.zip)
* test set translations: [opus-2020-02-11.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.test.txt)
* test set scores: [opus-2020-02-11.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-en/opus-2020-02-11.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.en | 56.1 | 0.697 |
|
Helsinki-NLP/opus-mt-bcl-es | 2021-01-18T07:49:09.000Z | [
"pytorch",
"marian",
"seq2seq",
"bcl",
"es",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 42 | transformers | ---
tags:
- translation
---
### opus-mt-bcl-es
* source languages: bcl
* target languages: es
* OPUS readme: [bcl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.es | 37.0 | 0.551 |
|
Helsinki-NLP/opus-mt-bcl-fi | 2021-01-18T07:49:15.000Z | [
"pytorch",
"marian",
"seq2seq",
"bcl",
"fi",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 60 | transformers | ---
tags:
- translation
---
### opus-mt-bcl-fi
* source languages: bcl
* target languages: fi
* OPUS readme: [bcl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.fi | 33.3 | 0.573 |
|
Helsinki-NLP/opus-mt-bcl-fr | 2021-01-18T07:49:21.000Z | [
"pytorch",
"marian",
"seq2seq",
"bcl",
"fr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 43 | transformers | ---
tags:
- translation
---
### opus-mt-bcl-fr
* source languages: bcl
* target languages: fr
* OPUS readme: [bcl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.fr | 35.0 | 0.527 |
|
Helsinki-NLP/opus-mt-bcl-sv | 2021-01-18T07:49:27.000Z | [
"pytorch",
"marian",
"seq2seq",
"bcl",
"sv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 51 | transformers | ---
tags:
- translation
---
### opus-mt-bcl-sv
* source languages: bcl
* target languages: sv
* OPUS readme: [bcl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bcl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bcl-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bcl-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bcl.sv | 38.0 | 0.565 |
|
Helsinki-NLP/opus-mt-be-es | 2021-01-18T07:49:34.000Z | [
"pytorch",
"marian",
"seq2seq",
"be",
"es",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 32 | transformers | ---
language:
- be
- es
tags:
- translation
license: apache-2.0
---
### bel-spa
* source group: Belarusian
* target group: Spanish
* OPUS readme: [bel-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bel-spa/README.md)
* model: transformer-align
* source language(s): bel bel_Latn
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bel.spa | 11.8 | 0.272 |
### System Info:
- hf_name: bel-spa
- source_languages: bel
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bel-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'es']
- src_constituents: {'bel', 'bel_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.test.txt
- src_alpha3: bel
- tgt_alpha3: spa
- short_pair: be-es
- chrF2_score: 0.272
- bleu: 11.8
- brevity_penalty: 0.892
- ref_len: 1412.0
- src_name: Belarusian
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: be
- tgt_alpha2: es
- prefer_old: False
- long_pair: bel-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bem-en | 2021-01-18T07:49:43.000Z | [
"pytorch",
"marian",
"seq2seq",
"bem",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 72 | transformers | ---
tags:
- translation
---
### opus-mt-bem-en
* source languages: bem
* target languages: en
* OPUS readme: [bem-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.en | 33.4 | 0.491 |
|
Helsinki-NLP/opus-mt-bem-es | 2021-01-18T07:49:49.000Z | [
"pytorch",
"marian",
"seq2seq",
"bem",
"es",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 54 | transformers | ---
tags:
- translation
---
### opus-mt-bem-es
* source languages: bem
* target languages: es
* OPUS readme: [bem-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.es | 22.8 | 0.403 |
|
Helsinki-NLP/opus-mt-bem-fi | 2021-01-18T07:49:55.000Z | [
"pytorch",
"marian",
"seq2seq",
"bem",
"fi",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 49 | transformers | ---
tags:
- translation
---
### opus-mt-bem-fi
* source languages: bem
* target languages: fi
* OPUS readme: [bem-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.fi | 22.8 | 0.439 |
|
Helsinki-NLP/opus-mt-bem-fr | 2021-01-18T07:50:01.000Z | [
"pytorch",
"marian",
"seq2seq",
"bem",
"fr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 49 | transformers | ---
tags:
- translation
---
### opus-mt-bem-fr
* source languages: bem
* target languages: fr
* OPUS readme: [bem-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.fr | 25.0 | 0.417 |
|
Helsinki-NLP/opus-mt-bem-sv | 2021-01-18T07:50:06.000Z | [
"pytorch",
"marian",
"seq2seq",
"bem",
"sv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 35 | transformers | ---
tags:
- translation
---
### opus-mt-bem-sv
* source languages: bem
* target languages: sv
* OPUS readme: [bem-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bem-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bem-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bem-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bem.sv | 25.6 | 0.434 |
|
Helsinki-NLP/opus-mt-ber-en | 2021-01-18T07:50:12.000Z | [
"pytorch",
"marian",
"seq2seq",
"ber",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 70 | transformers | ---
tags:
- translation
---
### opus-mt-ber-en
* source languages: ber
* target languages: en
* OPUS readme: [ber-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ber.en | 37.3 | 0.566 |
|
Helsinki-NLP/opus-mt-ber-es | 2021-01-18T07:50:16.000Z | [
"pytorch",
"marian",
"seq2seq",
"ber",
"es",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 47 | transformers | ---
tags:
- translation
---
### opus-mt-ber-es
* source languages: ber
* target languages: es
* OPUS readme: [ber-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ber.es | 33.8 | 0.487 |
|
Helsinki-NLP/opus-mt-ber-fr | 2021-01-18T07:50:20.000Z | [
"pytorch",
"marian",
"seq2seq",
"ber",
"fr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 43 | transformers | ---
tags:
- translation
---
### opus-mt-ber-fr
* source languages: ber
* target languages: fr
* OPUS readme: [ber-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ber-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ber-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ber.fr | 60.2 | 0.754 |
|
Helsinki-NLP/opus-mt-bg-de | 2021-01-18T07:50:26.000Z | [
"pytorch",
"marian",
"seq2seq",
"bg",
"de",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 29 | transformers | ---
language:
- bg
- de
tags:
- translation
license: apache-2.0
---
### bul-deu
* source group: Bulgarian
* target group: German
* OPUS readme: [bul-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-deu/README.md)
* model: transformer
* source language(s): bul
* target language(s): deu
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.deu | 49.3 | 0.676 |
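Decoding can be tuned through `generate`; the sketch below is illustrative only, and the beam size and length limit are assumptions rather than the settings used to produce the score above.

```python
# Minimal sketch: beam-search decoding with the bg-de checkpoint.
# num_beams and max_length are illustrative assumptions.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-bg-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Времето днес е хубаво."], return_tensors="pt", padding=True)
generated = model.generate(**batch, num_beams=4, max_length=128, early_stopping=True)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```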
### System Info:
- hf_name: bul-deu
- source_languages: bul
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'de']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-deu/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: deu
- short_pair: bg-de
- chrF2_score: 0.676
- bleu: 49.3
- brevity_penalty: 1.0
- ref_len: 2218.0
- src_name: Bulgarian
- tgt_name: German
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: de
- prefer_old: False
- long_pair: bul-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bg-en | 2021-01-18T07:50:31.000Z | [
"pytorch",
"marian",
"seq2seq",
"bg",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 460 | transformers | ---
tags:
- translation
---
### opus-mt-bg-en
* source languages: bg
* target languages: en
* OPUS readme: [bg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.bg.en | 59.4 | 0.727 |
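The BLEU and chr-F figures above come from the published evaluation file; as a rough sketch, comparable corpus-level scores can be computed locally with `sacrebleu` over your own hypotheses and references (the sentence pairs below are placeholders, and recent sacrebleu versions report chrF on a 0–100 scale).

```python
# Minimal sketch: scoring translations with sacrebleu (BLEU and chrF).
# The hypothesis/reference pairs are placeholders, not the Tatoeba test set.
import sacrebleu

hypotheses = ["The weather is nice today.", "I am going to work."]
references = [["The weather is nice today.", "I am going to work."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU: {bleu.score:.1f}  chrF: {chrf.score:.3f}")
```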
|
Helsinki-NLP/opus-mt-bg-eo | 2021-01-18T07:50:35.000Z | [
"pytorch",
"marian",
"seq2seq",
"bg",
"eo",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 41 | transformers | ---
language:
- bg
- eo
tags:
- translation
license: apache-2.0
---
### bul-epo
* source group: Bulgarian
* target group: Esperanto
* OPUS readme: [bul-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-epo/README.md)
* model: transformer-align
* source language(s): bul
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.epo | 24.5 | 0.438 |
### System Info:
- hf_name: bul-epo
- source_languages: bul
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'eo']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-epo/opus-2020-06-16.test.txt
- src_alpha3: bul
- tgt_alpha3: epo
- short_pair: bg-eo
- chrF2_score: 0.438
- bleu: 24.5
- brevity_penalty: 0.967
- ref_len: 4043.0
- src_name: Bulgarian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: bg
- tgt_alpha2: eo
- prefer_old: False
- long_pair: bul-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bg-es | 2021-01-18T07:50:42.000Z | [
"pytorch",
"marian",
"seq2seq",
"bg",
"es",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 62 | transformers | ---
language:
- bg
- es
tags:
- translation
license: apache-2.0
---
### bul-spa
* source group: Bulgarian
* target group: Spanish
* OPUS readme: [bul-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-spa/README.md)
* model: transformer
* source language(s): bul
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.spa | 49.1 | 0.661 |
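The linked test-set translations are plain text files; a small sketch of fetching and inspecting one with the standard library is shown below (URL taken from this card, output handling illustrative).

```python
# Minimal sketch: downloading and peeking at the published test-set translations.
import urllib.request

url = "https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.test.txt"
with urllib.request.urlopen(url) as response:
    lines = response.read().decode("utf-8").splitlines()

for line in lines[:5]:  # show the first few lines only
    print(line)
```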
### System Info:
- hf_name: bul-spa
- source_languages: bul
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'es']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-spa/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: spa
- short_pair: bg-es
- chrF2_score: 0.661
- bleu: 49.1
- brevity_penalty: 0.992
- ref_len: 1783.0
- src_name: Bulgarian
- tgt_name: Spanish
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: es
- prefer_old: False
- long_pair: bul-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bg-fi | 2021-01-18T07:50:51.000Z | [
"pytorch",
"marian",
"seq2seq",
"bg",
"fi",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 32 | transformers | ---
tags:
- translation
---
### opus-mt-bg-fi
* source languages: bg
* target languages: fi
* OPUS readme: [bg-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bg.fi | 23.7 | 0.505 |
|
Helsinki-NLP/opus-mt-bg-fr | 2021-01-18T07:50:58.000Z | [
"pytorch",
"marian",
"seq2seq",
"bg",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 70 | transformers | ---
language:
- bg
- fr
tags:
- translation
license: apache-2.0
---
### bul-fra
* source group: Bulgarian
* target group: French
* OPUS readme: [bul-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-fra/README.md)
* model: transformer
* source language(s): bul
* target language(s): fra
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.fra | 53.7 | 0.693 |
### System Info:
- hf_name: bul-fra
- source_languages: bul
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'fr']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-fra/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: fra
- short_pair: bg-fr
- chrF2_score: 0.693
- bleu: 53.7
- brevity_penalty: 0.977
- ref_len: 3669.0
- src_name: Bulgarian
- tgt_name: French
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: fr
- prefer_old: False
- long_pair: bul-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bg-it | 2021-01-18T07:51:04.000Z | [
"pytorch",
"marian",
"seq2seq",
"bg",
"it",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 31 | transformers | ---
language:
- bg
- it
tags:
- translation
license: apache-2.0
---
### bul-ita
* source group: Bulgarian
* target group: Italian
* OPUS readme: [bul-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ita/README.md)
* model: transformer
* source language(s): bul
* target language(s): ita
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.ita | 43.1 | 0.653 |
### System Info:
- hf_name: bul-ita
- source_languages: bul
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'it']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ita/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: ita
- short_pair: bg-it
- chrF2_score: 0.653
- bleu: 43.1
- brevity_penalty: 0.987
- ref_len: 16951.0
- src_name: Bulgarian
- tgt_name: Italian
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: it
- prefer_old: False
- long_pair: bul-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bg-ru | 2021-01-18T07:51:10.000Z | [
"pytorch",
"marian",
"seq2seq",
"bg",
"ru",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 44 | transformers | ---
language:
- bg
- ru
tags:
- translation
license: apache-2.0
---
### bul-rus
* source group: Bulgarian
* target group: Russian
* OPUS readme: [bul-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-rus/README.md)
* model: transformer
* source language(s): bul bul_Latn
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.rus | 48.5 | 0.691 |
### System Info:
- hf_name: bul-rus
- source_languages: bul
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'ru']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-rus/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: rus
- short_pair: bg-ru
- chrF2_score: 0.691
- bleu: 48.5
- brevity_penalty: 1.0
- ref_len: 7870.0
- src_name: Bulgarian
- tgt_name: Russian
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: ru
- prefer_old: False
- long_pair: bul-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bg-sv | 2021-01-18T07:51:16.000Z | [
"pytorch",
"marian",
"seq2seq",
"bg",
"sv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 41 | transformers | ---
tags:
- translation
---
### opus-mt-bg-sv
* source languages: bg
* target languages: sv
* OPUS readme: [bg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bg-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bg-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bg.sv | 29.1 | 0.494 |
|
Helsinki-NLP/opus-mt-bg-tr | 2021-01-18T07:51:22.000Z | [
"pytorch",
"marian",
"seq2seq",
"bg",
"tr",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 31 | transformers | ---
language:
- bg
- tr
tags:
- translation
license: apache-2.0
---
### bul-tur
* source group: Bulgarian
* target group: Turkish
* OPUS readme: [bul-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-tur/README.md)
* model: transformer
* source language(s): bul bul_Latn
* target language(s): tur
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.tur | 40.9 | 0.687 |
### System Info:
- hf_name: bul-tur
- source_languages: bul
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'tr']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-tur/opus-2020-07-03.test.txt
- src_alpha3: bul
- tgt_alpha3: tur
- short_pair: bg-tr
- chrF2_score: 0.687
- bleu: 40.9
- brevity_penalty: 0.946
- ref_len: 4948.0
- src_name: Bulgarian
- tgt_name: Turkish
- train_date: 2020-07-03
- src_alpha2: bg
- tgt_alpha2: tr
- prefer_old: False
- long_pair: bul-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bg-uk | 2021-01-18T07:51:27.000Z | [
"pytorch",
"marian",
"seq2seq",
"bg",
"uk",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 27 | transformers | ---
language:
- bg
- uk
tags:
- translation
license: apache-2.0
---
### bul-ukr
* source group: Bulgarian
* target group: Ukrainian
* OPUS readme: [bul-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ukr/README.md)
* model: transformer-align
* source language(s): bul
* target language(s): ukr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bul.ukr | 49.2 | 0.683 |
### System Info:
- hf_name: bul-ukr
- source_languages: bul
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bul-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bg', 'uk']
- src_constituents: {'bul', 'bul_Latn'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bul-ukr/opus-2020-06-17.test.txt
- src_alpha3: bul
- tgt_alpha3: ukr
- short_pair: bg-uk
- chrF2_score: 0.683
- bleu: 49.2
- brevity_penalty: 0.983
- ref_len: 4932.0
- src_name: Bulgarian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: bg
- tgt_alpha2: uk
- prefer_old: False
- long_pair: bul-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bi-en | 2021-01-18T07:51:33.000Z | [
"pytorch",
"marian",
"seq2seq",
"bi",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 45 | transformers | ---
tags:
- translation
---
### opus-mt-bi-en
* source languages: bi
* target languages: en
* OPUS readme: [bi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-en/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.en | 30.3 | 0.458 |
|
Helsinki-NLP/opus-mt-bi-es | 2021-01-18T07:51:39.000Z | [
"pytorch",
"marian",
"seq2seq",
"bi",
"es",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 51 | transformers | ---
tags:
- translation
---
### opus-mt-bi-es
* source languages: bi
* target languages: es
* OPUS readme: [bi-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-es/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.es | 21.1 | 0.388 |
|
Helsinki-NLP/opus-mt-bi-fr | 2021-01-18T07:51:43.000Z | [
"pytorch",
"marian",
"seq2seq",
"bi",
"fr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 49 | transformers | ---
tags:
- translation
---
### opus-mt-bi-fr
* source languages: bi
* target languages: fr
* OPUS readme: [bi-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-fr/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.fr | 21.5 | 0.382 |
|
Helsinki-NLP/opus-mt-bi-sv | 2021-01-18T07:51:48.000Z | [
"pytorch",
"marian",
"seq2seq",
"bi",
"sv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 41 | transformers | ---
tags:
- translation
---
### opus-mt-bi-sv
* source languages: bi
* target languages: sv
* OPUS readme: [bi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bi-sv/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bi.sv | 22.7 | 0.403 |
|
Helsinki-NLP/opus-mt-bn-en | 2021-01-18T07:51:55.000Z | [
"pytorch",
"marian",
"seq2seq",
"bn",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 315 | transformers | ---
language:
- bn
- en
tags:
- translation
license: apache-2.0
---
### ben-eng
* source group: Bengali
* target group: English
* OPUS readme: [ben-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ben-eng/README.md)
* model: transformer-align
* source language(s): ben
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ben.eng | 49.7 | 0.641 |
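The card does not ship a usage snippet; below is a minimal sketch of running this checkpoint with the transformers Marian classes (the Bengali example sentence is illustrative only and not taken from the test set):
```python
from transformers import MarianMTModel, MarianTokenizer

# Model id as published on the hub (see the modelId field above)
model_name = "Helsinki-NLP/opus-mt-bn-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Any Bengali input works here; this sentence is only an illustration.
src_text = ["আমি বই পড়তে ভালোবাসি।"]

batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```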
### System Info:
- hf_name: ben-eng
- source_languages: ben
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ben-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['bn', 'en']
- src_constituents: {'ben'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.test.txt
- src_alpha3: ben
- tgt_alpha3: eng
- short_pair: bn-en
- chrF2_score: 0.6409999999999999
- bleu: 49.7
- brevity_penalty: 0.976
- ref_len: 13978.0
- src_name: Bengali
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: bn
- tgt_alpha2: en
- prefer_old: False
- long_pair: ben-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bnt-en | 2021-01-18T07:52:00.000Z | [
"pytorch",
"marian",
"seq2seq",
"sn",
"zu",
"rw",
"lg",
"ts",
"ln",
"ny",
"xh",
"rn",
"bnt",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 36 | transformers | ---
language:
- sn
- zu
- rw
- lg
- ts
- ln
- ny
- xh
- rn
- bnt
- en
tags:
- translation
license: apache-2.0
---
### bnt-eng
* source group: Bantu languages
* target group: English
* OPUS readme: [bnt-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bnt-eng/README.md)
* model: transformer
* source language(s): kin lin lug nya run sna swh toi_Latn tso umb xho zul
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kin-eng.kin.eng | 31.7 | 0.481 |
| Tatoeba-test.lin-eng.lin.eng | 8.3 | 0.271 |
| Tatoeba-test.lug-eng.lug.eng | 5.3 | 0.128 |
| Tatoeba-test.multi.eng | 23.1 | 0.394 |
| Tatoeba-test.nya-eng.nya.eng | 38.3 | 0.527 |
| Tatoeba-test.run-eng.run.eng | 26.6 | 0.431 |
| Tatoeba-test.sna-eng.sna.eng | 27.5 | 0.440 |
| Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.195 |
| Tatoeba-test.toi-eng.toi.eng | 16.2 | 0.342 |
| Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 |
| Tatoeba-test.umb-eng.umb.eng | 8.4 | 0.231 |
| Tatoeba-test.xho-eng.xho.eng | 37.2 | 0.554 |
| Tatoeba-test.zul-eng.zul.eng | 40.9 | 0.576 |
### System Info:
- hf_name: bnt-eng
- source_languages: bnt
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bnt-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sn', 'zu', 'rw', 'lg', 'ts', 'ln', 'ny', 'xh', 'rn', 'bnt', 'en']
- src_constituents: {'sna', 'zul', 'kin', 'lug', 'tso', 'lin', 'nya', 'xho', 'swh', 'run', 'toi_Latn', 'umb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bnt-eng/opus2m-2020-07-31.test.txt
- src_alpha3: bnt
- tgt_alpha3: eng
- short_pair: bnt-en
- chrF2_score: 0.39399999999999996
- bleu: 23.1
- brevity_penalty: 1.0
- ref_len: 14565.0
- src_name: Bantu languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: bnt
- tgt_alpha2: en
- prefer_old: False
- long_pair: bnt-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-bzs-en | 2021-01-18T07:52:05.000Z | [
"pytorch",
"marian",
"seq2seq",
"bzs",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 60 | transformers | ---
tags:
- translation
---
### opus-mt-bzs-en
* source languages: bzs
* target languages: en
* OPUS readme: [bzs-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.en | 44.5 | 0.605 |
|
Helsinki-NLP/opus-mt-bzs-es | 2021-01-18T07:52:16.000Z | [
"pytorch",
"marian",
"seq2seq",
"bzs",
"es",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 51 | transformers | ---
tags:
- translation
---
### opus-mt-bzs-es
* source languages: bzs
* target languages: es
* OPUS readme: [bzs-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.es | 28.1 | 0.464 |
|
Helsinki-NLP/opus-mt-bzs-fi | 2021-01-18T07:52:23.000Z | [
"pytorch",
"marian",
"seq2seq",
"bzs",
"fi",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 29 | transformers | ---
tags:
- translation
---
### opus-mt-bzs-fi
* source languages: bzs
* target languages: fi
* OPUS readme: [bzs-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.fi | 24.7 | 0.464 |
|
Helsinki-NLP/opus-mt-bzs-fr | 2021-01-18T07:52:28.000Z | [
"pytorch",
"marian",
"seq2seq",
"bzs",
"fr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 52 | transformers | ---
tags:
- translation
---
### opus-mt-bzs-fr
* source languages: bzs
* target languages: fr
* OPUS readme: [bzs-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.fr | 30.0 | 0.479 |
|
Helsinki-NLP/opus-mt-bzs-sv | 2021-01-18T07:52:35.000Z | [
"pytorch",
"marian",
"seq2seq",
"bzs",
"sv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 56 | transformers | ---
tags:
- translation
---
### opus-mt-bzs-sv
* source languages: bzs
* target languages: sv
* OPUS readme: [bzs-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/bzs-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/bzs-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.bzs.sv | 30.7 | 0.489 |
|
Helsinki-NLP/opus-mt-ca-de | 2021-01-18T07:52:44.000Z | [
"pytorch",
"marian",
"seq2seq",
"ca",
"de",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 77 | transformers | ---
language:
- ca
- de
tags:
- translation
license: apache-2.0
---
### cat-deu
* source group: Catalan
* target group: German
* OPUS readme: [cat-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-deu/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.deu | 39.5 | 0.593 |
### System Info:
- hf_name: cat-deu
- source_languages: cat
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'de']
- src_constituents: {'cat'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-deu/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: deu
- short_pair: ca-de
- chrF2_score: 0.593
- bleu: 39.5
- brevity_penalty: 1.0
- ref_len: 5643.0
- src_name: Catalan
- tgt_name: German
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: de
- prefer_old: False
- long_pair: cat-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ca-en | 2021-01-18T07:52:50.000Z | [
"pytorch",
"marian",
"seq2seq",
"ca",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 721 | transformers | ---
tags:
- translation
---
### opus-mt-ca-en
* source languages: ca
* target languages: en
* OPUS readme: [ca-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ca-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ca.en | 51.4 | 0.678 |
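The BLEU/chr-F figures above can in principle be re-scored locally from the linked test-set translations; the sketch below uses sacrebleu and assumes you have already split the published `.test.txt` into line-aligned reference and hypothesis files (the file names and that split are assumptions, not part of the release):
```python
import sacrebleu

# Assumed inputs: one sentence per line, references and system output aligned line-by-line.
with open("tatoeba.ca-en.ref", encoding="utf-8") as f:
    refs = [line.strip() for line in f]
with open("tatoeba.ca-en.hyp", encoding="utf-8") as f:
    hyps = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hyps, [refs])   # corpus-level BLEU
chrf = sacrebleu.corpus_chrf(hyps, [refs])   # chr-F; scale (0-1 vs 0-100) depends on the sacrebleu version
print(f"BLEU  = {bleu.score:.1f}")
print(f"chr-F = {chrf.score:.3f}")
```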
|
Helsinki-NLP/opus-mt-ca-es | 2021-01-18T07:52:56.000Z | [
"pytorch",
"marian",
"seq2seq",
"ca",
"es",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 114 | transformers | ---
tags:
- translation
---
### opus-mt-ca-es
* source languages: ca
* target languages: es
* OPUS readme: [ca-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ca-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ca-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ca.es | 74.9 | 0.863 |
|
Helsinki-NLP/opus-mt-ca-fr | 2021-01-18T07:53:03.000Z | [
"pytorch",
"marian",
"seq2seq",
"ca",
"fr",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 63 | transformers | ---
language:
- ca
- fr
tags:
- translation
license: apache-2.0
---
### cat-fra
* source group: Catalan
* target group: French
* OPUS readme: [cat-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-fra/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-fra/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-fra/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-fra/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.fra | 52.4 | 0.694 |
### System Info:
- hf_name: cat-fra
- source_languages: cat
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'fr']
- src_constituents: {'cat'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-fra/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-fra/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: fra
- short_pair: ca-fr
- chrF2_score: 0.6940000000000001
- bleu: 52.4
- brevity_penalty: 0.987
- ref_len: 5517.0
- src_name: Catalan
- tgt_name: French
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: fr
- prefer_old: False
- long_pair: cat-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ca-it | 2021-01-18T07:53:08.000Z | [
"pytorch",
"marian",
"seq2seq",
"ca",
"it",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 43 | transformers | ---
language:
- ca
- it
tags:
- translation
license: apache-2.0
---
### cat-ita
* source group: Catalan
* target group: Italian
* OPUS readme: [cat-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-ita/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): ita
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.ita | 48.6 | 0.690 |
### System Info:
- hf_name: cat-ita
- source_languages: cat
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'it']
- src_constituents: {'cat'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ita/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: ita
- short_pair: ca-it
- chrF2_score: 0.69
- bleu: 48.6
- brevity_penalty: 0.985
- ref_len: 1995.0
- src_name: Catalan
- tgt_name: Italian
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: it
- prefer_old: False
- long_pair: cat-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ca-nl | 2021-01-18T07:53:12.000Z | [
"pytorch",
"marian",
"seq2seq",
"ca",
"nl",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 35 | transformers | ---
language:
- ca
- nl
tags:
- translation
license: apache-2.0
---
### cat-nld
* source group: Catalan
* target group: Dutch
* OPUS readme: [cat-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-nld/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): nld
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.nld | 45.1 | 0.632 |
### System Info:
- hf_name: cat-nld
- source_languages: cat
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'nl']
- src_constituents: {'cat'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-nld/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: nld
- short_pair: ca-nl
- chrF2_score: 0.632
- bleu: 45.1
- brevity_penalty: 0.965
- ref_len: 4157.0
- src_name: Catalan
- tgt_name: Dutch
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: nl
- prefer_old: False
- long_pair: cat-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ca-pt | 2021-01-18T07:53:17.000Z | [
"pytorch",
"marian",
"seq2seq",
"ca",
"pt",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 37 | transformers | ---
language:
- ca
- pt
tags:
- translation
license: apache-2.0
---
### cat-por
* source group: Catalan
* target group: Portuguese
* OPUS readme: [cat-por](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-por/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): por
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.por | 44.9 | 0.658 |
### System Info:
- hf_name: cat-por
- source_languages: cat
- target_languages: por
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-por/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'pt']
- src_constituents: {'cat'}
- tgt_constituents: {'por'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-por/opus-2020-06-17.test.txt
- src_alpha3: cat
- tgt_alpha3: por
- short_pair: ca-pt
- chrF2_score: 0.6579999999999999
- bleu: 44.9
- brevity_penalty: 0.953
- ref_len: 5847.0
- src_name: Catalan
- tgt_name: Portuguese
- train_date: 2020-06-17
- src_alpha2: ca
- tgt_alpha2: pt
- prefer_old: False
- long_pair: cat-por
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ca-uk | 2021-01-18T07:53:22.000Z | [
"pytorch",
"marian",
"seq2seq",
"ca",
"uk",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 34 | transformers | ---
language:
- ca
- uk
tags:
- translation
license: apache-2.0
---
### cat-ukr
* source group: Catalan
* target group: Ukrainian
* OPUS readme: [cat-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-ukr/README.md)
* model: transformer-align
* source language(s): cat
* target language(s): ukr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.cat.ukr | 28.6 | 0.503 |
### System Info:
- hf_name: cat-ukr
- source_languages: cat
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cat-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'uk']
- src_constituents: {'cat'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cat-ukr/opus-2020-06-16.test.txt
- src_alpha3: cat
- tgt_alpha3: ukr
- short_pair: ca-uk
- chrF2_score: 0.503
- bleu: 28.6
- brevity_penalty: 0.9670000000000001
- ref_len: 2438.0
- src_name: Catalan
- tgt_name: Ukrainian
- train_date: 2020-06-16
- src_alpha2: ca
- tgt_alpha2: uk
- prefer_old: False
- long_pair: cat-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-cau-en | 2021-01-18T07:53:27.000Z | [
"pytorch",
"marian",
"seq2seq",
"ab",
"ka",
"ce",
"cau",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 31 | transformers | ---
language:
- ab
- ka
- ce
- cau
- en
tags:
- translation
license: apache-2.0
---
### cau-eng
* source group: Caucasian languages
* target group: English
* OPUS readme: [cau-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cau-eng/README.md)
* model: transformer
* source language(s): abk ady che kat
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.abk-eng.abk.eng | 0.3 | 0.134 |
| Tatoeba-test.ady-eng.ady.eng | 0.4 | 0.104 |
| Tatoeba-test.che-eng.che.eng | 0.6 | 0.128 |
| Tatoeba-test.kat-eng.kat.eng | 18.6 | 0.366 |
| Tatoeba-test.multi.eng | 16.6 | 0.351 |
### System Info:
- hf_name: cau-eng
- source_languages: cau
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cau-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ab', 'ka', 'ce', 'cau', 'en']
- src_constituents: {'abk', 'kat', 'che', 'ady'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cau-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cau
- tgt_alpha3: eng
- short_pair: cau-en
- chrF2_score: 0.35100000000000003
- bleu: 16.6
- brevity_penalty: 1.0
- ref_len: 6285.0
- src_name: Caucasian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cau
- tgt_alpha2: en
- prefer_old: False
- long_pair: cau-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ccs-en | 2021-01-18T07:53:32.000Z | [
"pytorch",
"marian",
"seq2seq",
"ka",
"ccs",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 28 | transformers | ---
language:
- ka
- ccs
- en
tags:
- translation
license: apache-2.0
---
### ccs-eng
* source group: South Caucasian languages
* target group: English
* OPUS readme: [ccs-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ccs-eng/README.md)
* model: transformer
* source language(s): kat
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kat-eng.kat.eng | 18.0 | 0.357 |
| Tatoeba-test.multi.eng | 18.0 | 0.357 |
### System Info:
- hf_name: ccs-eng
- source_languages: ccs
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ccs-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ka', 'ccs', 'en']
- src_constituents: {'kat'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.test.txt
- src_alpha3: ccs
- tgt_alpha3: eng
- short_pair: ccs-en
- chrF2_score: 0.35700000000000004
- bleu: 18.0
- brevity_penalty: 1.0
- ref_len: 5992.0
- src_name: South Caucasian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: ccs
- tgt_alpha2: en
- prefer_old: False
- long_pair: ccs-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ceb-en | 2021-01-18T07:53:40.000Z | [
"pytorch",
"marian",
"seq2seq",
"ceb",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 274 | transformers | ---
language:
- ceb
- en
tags:
- translation
license: apache-2.0
---
### ceb-eng
* source group: Cebuano
* target group: English
* OPUS readme: [ceb-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ceb-eng/README.md)
* model: transformer-align
* source language(s): ceb
* target language(s): eng
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ceb.eng | 21.5 | 0.387 |
### System Info:
- hf_name: ceb-eng
- source_languages: ceb
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ceb-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ceb', 'en']
- src_constituents: {'ceb'}
- tgt_constituents: {'eng'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ceb-eng/opus-2020-06-17.test.txt
- src_alpha3: ceb
- tgt_alpha3: eng
- short_pair: ceb-en
- chrF2_score: 0.387
- bleu: 21.5
- brevity_penalty: 1.0
- ref_len: 2293.0
- src_name: Cebuano
- tgt_name: English
- train_date: 2020-06-17
- src_alpha2: ceb
- tgt_alpha2: en
- prefer_old: False
- long_pair: ceb-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-ceb-es | 2021-01-18T07:53:46.000Z | [
"pytorch",
"marian",
"seq2seq",
"ceb",
"es",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 55 | transformers | ---
tags:
- translation
---
### opus-mt-ceb-es
* source languages: ceb
* target languages: es
* OPUS readme: [ceb-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ceb-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ceb-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ceb.es | 31.6 | 0.508 |
|
Helsinki-NLP/opus-mt-ceb-fi | 2021-01-18T07:53:52.000Z | [
"pytorch",
"marian",
"seq2seq",
"ceb",
"fi",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 40 | transformers | ---
tags:
- translation
---
### opus-mt-ceb-fi
* source languages: ceb
* target languages: fi
* OPUS readme: [ceb-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ceb-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ceb.fi | 27.4 | 0.525 |
|
Helsinki-NLP/opus-mt-ceb-fr | 2021-01-18T07:53:57.000Z | [
"pytorch",
"marian",
"seq2seq",
"ceb",
"fr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 31 | transformers | ---
tags:
- translation
---
### opus-mt-ceb-fr
* source languages: ceb
* target languages: fr
* OPUS readme: [ceb-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ceb-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ceb.fr | 30.0 | 0.491 |
|
Helsinki-NLP/opus-mt-ceb-sv | 2021-01-18T07:54:02.000Z | [
"pytorch",
"marian",
"seq2seq",
"ceb",
"sv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 48 | transformers | ---
tags:
- translation
---
### opus-mt-ceb-sv
* source languages: ceb
* target languages: sv
* OPUS readme: [ceb-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ceb-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/ceb-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ceb-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ceb.sv | 35.5 | 0.552 |
|
Helsinki-NLP/opus-mt-cel-en | 2021-01-18T07:54:08.000Z | [
"pytorch",
"marian",
"seq2seq",
"gd",
"ga",
"br",
"kw",
"gv",
"cy",
"cel",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 41 | transformers | ---
language:
- gd
- ga
- br
- kw
- gv
- cy
- cel
- en
tags:
- translation
license: apache-2.0
---
### cel-eng
* source group: Celtic languages
* target group: English
* OPUS readme: [cel-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cel-eng/README.md)
* model: transformer
* source language(s): bre cor cym gla gle glv
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bre-eng.bre.eng | 17.2 | 0.385 |
| Tatoeba-test.cor-eng.cor.eng | 3.0 | 0.172 |
| Tatoeba-test.cym-eng.cym.eng | 41.5 | 0.582 |
| Tatoeba-test.gla-eng.gla.eng | 15.4 | 0.330 |
| Tatoeba-test.gle-eng.gle.eng | 50.8 | 0.668 |
| Tatoeba-test.glv-eng.glv.eng | 11.0 | 0.297 |
| Tatoeba-test.multi.eng | 22.8 | 0.398 |
### System Info:
- hf_name: cel-eng
- source_languages: cel
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cel-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['gd', 'ga', 'br', 'kw', 'gv', 'cy', 'cel', 'en']
- src_constituents: {'gla', 'gle', 'bre', 'cor', 'glv', 'cym'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cel-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cel
- tgt_alpha3: eng
- short_pair: cel-en
- chrF2_score: 0.39799999999999996
- bleu: 22.8
- brevity_penalty: 1.0
- ref_len: 42097.0
- src_name: Celtic languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cel
- tgt_alpha2: en
- prefer_old: False
- long_pair: cel-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-chk-en | 2021-01-18T07:54:13.000Z | [
"pytorch",
"marian",
"seq2seq",
"chk",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 49 | transformers | ---
tags:
- translation
---
### opus-mt-chk-en
* source languages: chk
* target languages: en
* OPUS readme: [chk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/chk-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/chk-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.chk.en | 31.2 | 0.465 |
|
Helsinki-NLP/opus-mt-chk-es | 2021-01-18T07:54:19.000Z | [
"pytorch",
"marian",
"seq2seq",
"chk",
"es",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 57 | transformers | ---
tags:
- translation
---
### opus-mt-chk-es
* source languages: chk
* target languages: es
* OPUS readme: [chk-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/chk-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/chk-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.chk.es | 20.8 | 0.374 |
|
Helsinki-NLP/opus-mt-chk-fr | 2021-01-18T07:54:24.000Z | [
"pytorch",
"marian",
"seq2seq",
"chk",
"fr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 53 | transformers | ---
tags:
- translation
---
### opus-mt-chk-fr
* source languages: chk
* target languages: fr
* OPUS readme: [chk-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/chk-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/chk-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.chk.fr | 22.4 | 0.387 |
|
Helsinki-NLP/opus-mt-chk-sv | 2021-01-18T07:54:30.000Z | [
"pytorch",
"marian",
"seq2seq",
"chk",
"sv",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 49 | transformers | ---
tags:
- translation
---
### opus-mt-chk-sv
* source languages: chk
* target languages: sv
* OPUS readme: [chk-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/chk-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/chk-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/chk-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.chk.sv | 23.6 | 0.406 |
|
Helsinki-NLP/opus-mt-cpf-en | 2021-01-18T07:54:35.000Z | [
"pytorch",
"marian",
"seq2seq",
"ht",
"cpf",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 31 | transformers | ---
language:
- ht
- cpf
- en
tags:
- translation
license: apache-2.0
---
### cpf-eng
* source group: Creoles and pidgins, French-based
* target group: English
* OPUS readme: [cpf-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpf-eng/README.md)
* model: transformer
* source language(s): gcf_Latn hat mfe
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.gcf-eng.gcf.eng | 8.4 | 0.229 |
| Tatoeba-test.hat-eng.hat.eng | 28.0 | 0.421 |
| Tatoeba-test.mfe-eng.mfe.eng | 66.0 | 0.808 |
| Tatoeba-test.multi.eng | 16.3 | 0.323 |
### System Info:
- hf_name: cpf-eng
- source_languages: cpf
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpf-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ht', 'cpf', 'en']
- src_constituents: {'gcf_Latn', 'hat', 'mfe'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpf-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cpf
- tgt_alpha3: eng
- short_pair: cpf-en
- chrF2_score: 0.32299999999999995
- bleu: 16.3
- brevity_penalty: 1.0
- ref_len: 990.0
- src_name: Creoles and pidgins, French-based
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cpf
- tgt_alpha2: en
- prefer_old: False
- long_pair: cpf-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-cpp-cpp | 2021-01-18T07:54:40.000Z | [
"pytorch",
"marian",
"seq2seq",
"id",
"cpp",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 42 | transformers | ---
language:
- id
- cpp
tags:
- translation
license: apache-2.0
---
### cpp-cpp
* source group: Creoles and pidgins, Portuguese-based
* target group: Creoles and pidgins, Portuguese-based
* OPUS readme: [cpp-cpp](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-cpp/README.md)
* model: transformer
* source language(s): ind pap
* target language(s): ind pap
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); a usage sketch follows the download links below
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.eval.txt)
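Because both sides of this model are multilingual, the sentence-initial `>>id<<` token selects the output language. A minimal sketch follows (the Indonesian example sentence is illustrative only, and it assumes `pap` is among the accepted target ids for this checkpoint):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-cpp-cpp"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>id<< prefix picks the target language; here Indonesian input -> Papiamento output.
src_text = [">>pap<< Saya suka membaca buku."]

batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```
Without the prefix the model has no way to know which of its target languages is wanted, which is why the card calls the token required.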
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa-msa.msa.msa | 0.7 | 0.149 |
| Tatoeba-test.msa-pap.msa.pap | 31.7 | 0.577 |
| Tatoeba-test.multi.multi | 21.1 | 0.369 |
| Tatoeba-test.pap-msa.pap.msa | 17.7 | 0.197 |
### System Info:
- hf_name: cpp-cpp
- source_languages: cpp
- target_languages: cpp
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-cpp/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['id', 'cpp']
- src_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'}
- tgt_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-cpp/opus-2020-07-26.test.txt
- src_alpha3: cpp
- tgt_alpha3: cpp
- short_pair: cpp-cpp
- chrF2_score: 0.369
- bleu: 21.1
- brevity_penalty: 0.882
- ref_len: 18.0
- src_name: Creoles and pidgins, Portuguese-based
- tgt_name: Creoles and pidgins, Portuguese-based
- train_date: 2020-07-26
- src_alpha2: cpp
- tgt_alpha2: cpp
- prefer_old: False
- long_pair: cpp-cpp
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-cpp-en | 2021-01-18T07:54:45.000Z | [
"pytorch",
"marian",
"seq2seq",
"id",
"cpp",
"en",
"transformers",
"translation",
"license:apache-2.0",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"metadata.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 48 | transformers | ---
language:
- id
- cpp
- en
tags:
- translation
license: apache-2.0
---
### cpp-eng
* source group: Creoles and pidgins, Portuguese-based
* target group: English
* OPUS readme: [cpp-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-eng/README.md)
* model: transformer
* source language(s): ind max_Latn min pap tmw_Latn zlm_Latn zsm_Latn
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.eval.txt)
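A minimal usage sketch (not part of the original card), here via the generic `translation` pipeline; the Papiamento example sentence is an illustrative assumption:
```python
# Sketch only: translate a Portuguese-based creole/pidgin sentence into English.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-cpp-en")
result = translator("Mi ta papia papiamentu.")  # Papiamento: "I speak Papiamento."
print(result[0]["translation_text"])
```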
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa-eng.msa.eng | 39.6 | 0.580 |
| Tatoeba-test.multi.eng | 39.7 | 0.580 |
| Tatoeba-test.pap-eng.pap.eng | 49.1 | 0.579 |
### System Info:
- hf_name: cpp-eng
- source_languages: cpp
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['id', 'cpp', 'en']
- src_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cpp
- tgt_alpha3: eng
- short_pair: cpp-en
- chrF2_score: 0.58
- bleu: 39.7
- brevity_penalty: 0.972
- ref_len: 37399.0
- src_name: Creoles and pidgins, Portuguese-based
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cpp
- tgt_alpha2: en
- prefer_old: False
- long_pair: cpp-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 |
Helsinki-NLP/opus-mt-crs-de | 2021-01-18T07:54:50.000Z | [
"pytorch",
"marian",
"seq2seq",
"crs",
"de",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 44 | transformers | ---
tags:
- translation
---
### opus-mt-crs-de
* source languages: crs
* target languages: de
* OPUS readme: [crs-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/crs-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/crs-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.crs.de | 20.4 | 0.397 |
|
Helsinki-NLP/opus-mt-crs-en | 2021-01-18T07:54:56.000Z | [
"pytorch",
"marian",
"seq2seq",
"crs",
"en",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 52 | transformers | ---
tags:
- translation
---
### opus-mt-crs-en
* source languages: crs
* target languages: en
* OPUS readme: [crs-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/crs-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/crs-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-en/opus-2019-12-18.eval.txt) (a scoring sketch follows the benchmarks table)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.crs.en | 42.9 | 0.589 |
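The BLEU and chr-F columns above can be reproduced with `sacrebleu` once system outputs and references are extracted from the linked test set. The snippet below is an assumed workflow with placeholder sentences, not part of the original card:
```python
# Assumed scoring workflow; real hypotheses/references would come from the linked test set.
import sacrebleu

hypotheses = ["We will see each other tomorrow."]   # system translations (placeholder)
references = [["See you tomorrow."]]                # one reference stream, aligned with hypotheses

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
# Depending on the sacrebleu version, chrF is reported on a 0-1 or 0-100 scale;
# the table above uses the 0-1 convention.
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score:.3f}")
```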
|
Helsinki-NLP/opus-mt-crs-es | 2021-01-18T07:55:05.000Z | [
"pytorch",
"marian",
"seq2seq",
"crs",
"es",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 47 | transformers | ---
tags:
- translation
---
### opus-mt-crs-es
* source languages: crs
* target languages: es
* OPUS readme: [crs-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/crs-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/crs-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-es/opus-2020-01-15.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.crs.es | 26.1 | 0.445 |
|
Helsinki-NLP/opus-mt-crs-fi | 2021-01-18T07:55:11.000Z | [
"pytorch",
"marian",
"seq2seq",
"crs",
"fi",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 64 | transformers | ---
tags:
- translation
---
### opus-mt-crs-fi
* source languages: crs
* target languages: fi
* OPUS readme: [crs-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/crs-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/crs-fi/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-fi/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-fi/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.crs.fi | 25.6 | 0.479 |
|
Helsinki-NLP/opus-mt-crs-fr | 2021-01-18T07:55:17.000Z | [
"pytorch",
"marian",
"seq2seq",
"crs",
"fr",
"transformers",
"translation",
"text2text-generation"
] | translation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"source.spm",
"target.spm",
"tokenizer_config.json",
"vocab.json"
] | Helsinki-NLP | 52 | transformers | ---
tags:
- translation
---
### opus-mt-crs-fr
* source languages: crs
* target languages: fr
* OPUS readme: [crs-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/crs-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/crs-fr/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-fr/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/crs-fr/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.crs.fr | 29.4 | 0.475 |
|