modelId stringlengths 4 81 | tags list | pipeline_tag stringclasses 17 values | config dict | downloads int64 0 59.7M | first_commit timestamp[ns, tz=UTC] | card stringlengths 51 438k | embedding list |
|---|---|---|---|---|---|---|---|
Declan/Independent__model | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2020-04-29T13:23:54Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-mt-sv
* source languages: mt
* target languages: sv
* OPUS readme: [mt-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mt-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/mt-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mt-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.mt.sv | 30.4 | 0.514 |
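The download links above all follow the same object-storage layout (`https://object.pouta.csc.fi/<repo>/<pair>/<release>.<ext>`). A small sketch of building those URLs programmatically; the helper name is our own, not part of any released tooling:

```python
def opus_mt_url(pair: str, release: str, repo: str = "OPUS-MT-models", ext: str = "zip") -> str:
    """Build the object-storage URL for an OPUS-MT release artifact.

    pair    -- language pair directory, e.g. "mt-sv"
    release -- release name, e.g. "opus-2020-01-16"
    repo    -- "OPUS-MT-models" or "Tatoeba-MT-models"
    ext     -- "zip" (weights), "test.txt" (translations), "eval.txt" (scores)
    """
    return f"https://object.pouta.csc.fi/{repo}/{pair}/{release}.{ext}"

# Reproduces the links listed in this card:
weights = opus_mt_url("mt-sv", "opus-2020-01-16")
scores = opus_mt_url("mt-sv", "opus-2020-01-16", ext="eval.txt")
```

The same pattern covers the Tatoeba-Challenge models further down, e.g. `opus_mt_url("mul-eng", "opus2m-2020-08-01", repo="Tatoeba-MT-models")`.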
| [
-0.015406040474772453,
-0.027176963165402412,
-0.0025865084026008844,
0.03915533050894737,
0.029139559715986252,
0.020494289696216583,
-0.0005059564136900008,
-0.0027681696228682995,
-0.04716123640537262,
0.05701708421111107,
0.010473106056451797,
-0.01414462924003601,
0.015011418610811234,
... |
Declan/NPR_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2020-08-19T00:30:48Z | ---
language:
- ca
- es
- os
- eo
- ro
- fy
- cy
- is
- lb
- su
- an
- sq
- fr
- ht
- rm
- cv
- ig
- am
- eu
- tr
- ps
- af
- ny
- ch
- uk
- sl
- lt
- tk
- sg
- ar
- lg
- bg
- be
- ka
- gd
- ja
- si
- br
- mh
- km
- th
- ty
- rw
- te
- mk
- or
- wo
- kl
- mr
- ru
- yo
- hu
- fo
- zh
- ti
- co
- ee
- oc
- sn
- mt
- ts
- pl
- gl
- nb
- bn
- tt
- bo
- lo
- id
- gn
- nv
- hy
- kn
- to
- io
- so
- vi
- da
- fj
- gv
- sm
- nl
- mi
- pt
- hi
- se
- as
- ta
- et
- kw
- ga
- sv
- ln
- na
- mn
- gu
- wa
- lv
- jv
- el
- my
- ba
- it
- hr
- ur
- ce
- nn
- fi
- mg
- rn
- xh
- ab
- de
- cs
- he
- zu
- yi
- ml
- mul
- en
tags:
- translation
license: apache-2.0
---
### mul-eng
* source group: Multiple languages
* target group: English
* OPUS readme: [mul-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md)
* model: transformer
* source language(s): abk acm ady afb afh_Latn afr akl_Latn aln amh ang_Latn apc ara arg arq ary arz asm ast avk_Latn awa aze_Latn bak bam_Latn bel bel_Latn ben bho bod bos_Latn bre brx brx_Latn bul bul_Latn cat ceb ces cha che chr chv cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant cor cos crh crh_Latn csb_Latn cym dan deu dsb dtp dws_Latn egl ell enm_Latn epo est eus ewe ext fao fij fin fkv_Latn fra frm_Latn frr fry fuc fuv gan gcf_Latn gil gla gle glg glv gom gos got_Goth grc_Grek grn gsw guj hat hau_Latn haw heb hif_Latn hil hin hnj_Latn hoc hoc_Latn hrv hsb hun hye iba ibo ido ido_Latn ike_Latn ile_Latn ilo ina_Latn ind isl ita izh jav jav_Java jbo jbo_Cyrl jbo_Latn jdt_Cyrl jpn kab kal kan kat kaz_Cyrl kaz_Latn kek_Latn kha khm khm_Latn kin kir_Cyrl kjh kpv krl ksh kum kur_Arab kur_Latn lad lad_Latn lao lat_Latn lav ldn_Latn lfn_Cyrl lfn_Latn lij lin lit liv_Latn lkt lld_Latn lmo ltg ltz lug lzh lzh_Hans mad mah mai mal mar max_Latn mdf mfe mhr mic min mkd mlg mlt mnw moh mon mri mwl mww mya myv nan nau nav nds niu nld nno nob nob_Hebr nog non_Latn nov_Latn npi nya oci ori orv_Cyrl oss ota_Arab ota_Latn pag pan_Guru pap pau pdc pes pes_Latn pes_Thaa pms pnb pol por ppl_Latn prg_Latn pus quc qya qya_Latn rap rif_Latn roh rom ron rue run rus sag sah san_Deva scn sco sgs shs_Latn shy_Latn sin sjn_Latn slv sma sme smo sna snd_Arab som spa sqi srp_Cyrl srp_Latn stq sun swe swg swh tah tam tat tat_Arab tat_Latn tel tet tgk_Cyrl tha tir tlh_Latn tly_Latn tmw_Latn toi_Latn ton tpw_Latn tso tuk tuk_Latn tur tvl tyv tzl tzl_Latn udm uig_Arab uig_Cyrl ukr umb urd uzb_Cyrl uzb_Latn vec vie vie_Hani vol_Latn vro war wln wol wuu xal xho yid yor yue yue_Hans yue_Hant zho zho_Hans zho_Hant zlm_Latn zsm_Latn zul zza
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2014-hineng.hin.eng | 8.5 | 0.341 |
| newsdev2015-enfi-fineng.fin.eng | 16.8 | 0.441 |
| newsdev2016-enro-roneng.ron.eng | 31.3 | 0.580 |
| newsdev2016-entr-tureng.tur.eng | 16.4 | 0.422 |
| newsdev2017-enlv-laveng.lav.eng | 21.3 | 0.502 |
| newsdev2017-enzh-zhoeng.zho.eng | 12.7 | 0.409 |
| newsdev2018-enet-esteng.est.eng | 19.8 | 0.467 |
| newsdev2019-engu-gujeng.guj.eng | 13.3 | 0.385 |
| newsdev2019-enlt-liteng.lit.eng | 19.9 | 0.482 |
| newsdiscussdev2015-enfr-fraeng.fra.eng | 26.7 | 0.520 |
| newsdiscusstest2015-enfr-fraeng.fra.eng | 29.8 | 0.541 |
| newssyscomb2009-ceseng.ces.eng | 21.1 | 0.487 |
| newssyscomb2009-deueng.deu.eng | 22.6 | 0.499 |
| newssyscomb2009-fraeng.fra.eng | 25.8 | 0.530 |
| newssyscomb2009-huneng.hun.eng | 15.1 | 0.430 |
| newssyscomb2009-itaeng.ita.eng | 29.4 | 0.555 |
| newssyscomb2009-spaeng.spa.eng | 26.1 | 0.534 |
| news-test2008-deueng.deu.eng | 21.6 | 0.491 |
| news-test2008-fraeng.fra.eng | 22.3 | 0.502 |
| news-test2008-spaeng.spa.eng | 23.6 | 0.514 |
| newstest2009-ceseng.ces.eng | 19.8 | 0.480 |
| newstest2009-deueng.deu.eng | 20.9 | 0.487 |
| newstest2009-fraeng.fra.eng | 25.0 | 0.523 |
| newstest2009-huneng.hun.eng | 14.7 | 0.425 |
| newstest2009-itaeng.ita.eng | 27.6 | 0.542 |
| newstest2009-spaeng.spa.eng | 25.7 | 0.530 |
| newstest2010-ceseng.ces.eng | 20.6 | 0.491 |
| newstest2010-deueng.deu.eng | 23.4 | 0.517 |
| newstest2010-fraeng.fra.eng | 26.1 | 0.537 |
| newstest2010-spaeng.spa.eng | 29.1 | 0.561 |
| newstest2011-ceseng.ces.eng | 21.0 | 0.489 |
| newstest2011-deueng.deu.eng | 21.3 | 0.494 |
| newstest2011-fraeng.fra.eng | 26.8 | 0.546 |
| newstest2011-spaeng.spa.eng | 28.2 | 0.549 |
| newstest2012-ceseng.ces.eng | 20.5 | 0.485 |
| newstest2012-deueng.deu.eng | 22.3 | 0.503 |
| newstest2012-fraeng.fra.eng | 27.5 | 0.545 |
| newstest2012-ruseng.rus.eng | 26.6 | 0.532 |
| newstest2012-spaeng.spa.eng | 30.3 | 0.567 |
| newstest2013-ceseng.ces.eng | 22.5 | 0.498 |
| newstest2013-deueng.deu.eng | 25.0 | 0.518 |
| newstest2013-fraeng.fra.eng | 27.4 | 0.537 |
| newstest2013-ruseng.rus.eng | 21.6 | 0.484 |
| newstest2013-spaeng.spa.eng | 28.4 | 0.555 |
| newstest2014-csen-ceseng.ces.eng | 24.0 | 0.517 |
| newstest2014-deen-deueng.deu.eng | 24.1 | 0.511 |
| newstest2014-fren-fraeng.fra.eng | 29.1 | 0.563 |
| newstest2014-hien-hineng.hin.eng | 14.0 | 0.414 |
| newstest2014-ruen-ruseng.rus.eng | 24.0 | 0.521 |
| newstest2015-encs-ceseng.ces.eng | 21.9 | 0.481 |
| newstest2015-ende-deueng.deu.eng | 25.5 | 0.519 |
| newstest2015-enfi-fineng.fin.eng | 17.4 | 0.441 |
| newstest2015-enru-ruseng.rus.eng | 22.4 | 0.494 |
| newstest2016-encs-ceseng.ces.eng | 23.0 | 0.500 |
| newstest2016-ende-deueng.deu.eng | 30.1 | 0.560 |
| newstest2016-enfi-fineng.fin.eng | 18.5 | 0.461 |
| newstest2016-enro-roneng.ron.eng | 29.6 | 0.562 |
| newstest2016-enru-ruseng.rus.eng | 22.0 | 0.495 |
| newstest2016-entr-tureng.tur.eng | 14.8 | 0.415 |
| newstest2017-encs-ceseng.ces.eng | 20.2 | 0.475 |
| newstest2017-ende-deueng.deu.eng | 26.0 | 0.523 |
| newstest2017-enfi-fineng.fin.eng | 19.6 | 0.465 |
| newstest2017-enlv-laveng.lav.eng | 16.2 | 0.454 |
| newstest2017-enru-ruseng.rus.eng | 24.2 | 0.510 |
| newstest2017-entr-tureng.tur.eng | 15.0 | 0.412 |
| newstest2017-enzh-zhoeng.zho.eng | 13.7 | 0.412 |
| newstest2018-encs-ceseng.ces.eng | 21.2 | 0.486 |
| newstest2018-ende-deueng.deu.eng | 31.5 | 0.564 |
| newstest2018-enet-esteng.est.eng | 19.7 | 0.473 |
| newstest2018-enfi-fineng.fin.eng | 15.1 | 0.418 |
| newstest2018-enru-ruseng.rus.eng | 21.3 | 0.490 |
| newstest2018-entr-tureng.tur.eng | 15.4 | 0.421 |
| newstest2018-enzh-zhoeng.zho.eng | 12.9 | 0.408 |
| newstest2019-deen-deueng.deu.eng | 27.0 | 0.529 |
| newstest2019-fien-fineng.fin.eng | 17.2 | 0.438 |
| newstest2019-guen-gujeng.guj.eng | 9.0 | 0.342 |
| newstest2019-lten-liteng.lit.eng | 22.6 | 0.512 |
| newstest2019-ruen-ruseng.rus.eng | 24.1 | 0.503 |
| newstest2019-zhen-zhoeng.zho.eng | 13.9 | 0.427 |
| newstestB2016-enfi-fineng.fin.eng | 15.2 | 0.428 |
| newstestB2017-enfi-fineng.fin.eng | 16.8 | 0.442 |
| newstestB2017-fien-fineng.fin.eng | 16.8 | 0.442 |
| Tatoeba-test.abk-eng.abk.eng | 2.4 | 0.190 |
| Tatoeba-test.ady-eng.ady.eng | 1.1 | 0.111 |
| Tatoeba-test.afh-eng.afh.eng | 1.7 | 0.108 |
| Tatoeba-test.afr-eng.afr.eng | 53.0 | 0.672 |
| Tatoeba-test.akl-eng.akl.eng | 5.9 | 0.239 |
| Tatoeba-test.amh-eng.amh.eng | 25.6 | 0.464 |
| Tatoeba-test.ang-eng.ang.eng | 11.7 | 0.289 |
| Tatoeba-test.ara-eng.ara.eng | 26.4 | 0.443 |
| Tatoeba-test.arg-eng.arg.eng | 35.9 | 0.473 |
| Tatoeba-test.asm-eng.asm.eng | 19.8 | 0.365 |
| Tatoeba-test.ast-eng.ast.eng | 31.8 | 0.467 |
| Tatoeba-test.avk-eng.avk.eng | 0.4 | 0.119 |
| Tatoeba-test.awa-eng.awa.eng | 9.7 | 0.271 |
| Tatoeba-test.aze-eng.aze.eng | 37.0 | 0.542 |
| Tatoeba-test.bak-eng.bak.eng | 13.9 | 0.395 |
| Tatoeba-test.bam-eng.bam.eng | 2.2 | 0.094 |
| Tatoeba-test.bel-eng.bel.eng | 36.8 | 0.549 |
| Tatoeba-test.ben-eng.ben.eng | 39.7 | 0.546 |
| Tatoeba-test.bho-eng.bho.eng | 33.6 | 0.540 |
| Tatoeba-test.bod-eng.bod.eng | 1.1 | 0.147 |
| Tatoeba-test.bre-eng.bre.eng | 14.2 | 0.303 |
| Tatoeba-test.brx-eng.brx.eng | 1.7 | 0.130 |
| Tatoeba-test.bul-eng.bul.eng | 46.0 | 0.621 |
| Tatoeba-test.cat-eng.cat.eng | 46.6 | 0.636 |
| Tatoeba-test.ceb-eng.ceb.eng | 17.4 | 0.347 |
| Tatoeba-test.ces-eng.ces.eng | 41.3 | 0.586 |
| Tatoeba-test.cha-eng.cha.eng | 7.9 | 0.232 |
| Tatoeba-test.che-eng.che.eng | 0.7 | 0.104 |
| Tatoeba-test.chm-eng.chm.eng | 7.3 | 0.261 |
| Tatoeba-test.chr-eng.chr.eng | 8.8 | 0.244 |
| Tatoeba-test.chv-eng.chv.eng | 11.0 | 0.319 |
| Tatoeba-test.cor-eng.cor.eng | 5.4 | 0.204 |
| Tatoeba-test.cos-eng.cos.eng | 58.2 | 0.643 |
| Tatoeba-test.crh-eng.crh.eng | 26.3 | 0.399 |
| Tatoeba-test.csb-eng.csb.eng | 18.8 | 0.389 |
| Tatoeba-test.cym-eng.cym.eng | 23.4 | 0.407 |
| Tatoeba-test.dan-eng.dan.eng | 50.5 | 0.659 |
| Tatoeba-test.deu-eng.deu.eng | 39.6 | 0.579 |
| Tatoeba-test.dsb-eng.dsb.eng | 24.3 | 0.449 |
| Tatoeba-test.dtp-eng.dtp.eng | 1.0 | 0.149 |
| Tatoeba-test.dws-eng.dws.eng | 1.6 | 0.061 |
| Tatoeba-test.egl-eng.egl.eng | 7.6 | 0.236 |
| Tatoeba-test.ell-eng.ell.eng | 55.4 | 0.682 |
| Tatoeba-test.enm-eng.enm.eng | 28.0 | 0.489 |
| Tatoeba-test.epo-eng.epo.eng | 41.8 | 0.591 |
| Tatoeba-test.est-eng.est.eng | 41.5 | 0.581 |
| Tatoeba-test.eus-eng.eus.eng | 37.8 | 0.557 |
| Tatoeba-test.ewe-eng.ewe.eng | 10.7 | 0.262 |
| Tatoeba-test.ext-eng.ext.eng | 25.5 | 0.405 |
| Tatoeba-test.fao-eng.fao.eng | 28.7 | 0.469 |
| Tatoeba-test.fas-eng.fas.eng | 7.5 | 0.281 |
| Tatoeba-test.fij-eng.fij.eng | 24.2 | 0.320 |
| Tatoeba-test.fin-eng.fin.eng | 35.8 | 0.534 |
| Tatoeba-test.fkv-eng.fkv.eng | 15.5 | 0.434 |
| Tatoeba-test.fra-eng.fra.eng | 45.1 | 0.618 |
| Tatoeba-test.frm-eng.frm.eng | 29.6 | 0.427 |
| Tatoeba-test.frr-eng.frr.eng | 5.5 | 0.138 |
| Tatoeba-test.fry-eng.fry.eng | 25.3 | 0.455 |
| Tatoeba-test.ful-eng.ful.eng | 1.1 | 0.127 |
| Tatoeba-test.gcf-eng.gcf.eng | 16.0 | 0.315 |
| Tatoeba-test.gil-eng.gil.eng | 46.7 | 0.587 |
| Tatoeba-test.gla-eng.gla.eng | 20.2 | 0.358 |
| Tatoeba-test.gle-eng.gle.eng | 43.9 | 0.592 |
| Tatoeba-test.glg-eng.glg.eng | 45.1 | 0.623 |
| Tatoeba-test.glv-eng.glv.eng | 3.3 | 0.119 |
| Tatoeba-test.gos-eng.gos.eng | 20.1 | 0.364 |
| Tatoeba-test.got-eng.got.eng | 0.1 | 0.041 |
| Tatoeba-test.grc-eng.grc.eng | 2.1 | 0.137 |
| Tatoeba-test.grn-eng.grn.eng | 1.7 | 0.152 |
| Tatoeba-test.gsw-eng.gsw.eng | 18.2 | 0.334 |
| Tatoeba-test.guj-eng.guj.eng | 21.7 | 0.373 |
| Tatoeba-test.hat-eng.hat.eng | 34.5 | 0.502 |
| Tatoeba-test.hau-eng.hau.eng | 10.5 | 0.295 |
| Tatoeba-test.haw-eng.haw.eng | 2.8 | 0.160 |
| Tatoeba-test.hbs-eng.hbs.eng | 46.7 | 0.623 |
| Tatoeba-test.heb-eng.heb.eng | 33.0 | 0.492 |
| Tatoeba-test.hif-eng.hif.eng | 17.0 | 0.391 |
| Tatoeba-test.hil-eng.hil.eng | 16.0 | 0.339 |
| Tatoeba-test.hin-eng.hin.eng | 36.4 | 0.533 |
| Tatoeba-test.hmn-eng.hmn.eng | 0.4 | 0.131 |
| Tatoeba-test.hoc-eng.hoc.eng | 0.7 | 0.132 |
| Tatoeba-test.hsb-eng.hsb.eng | 41.9 | 0.551 |
| Tatoeba-test.hun-eng.hun.eng | 33.2 | 0.510 |
| Tatoeba-test.hye-eng.hye.eng | 32.2 | 0.487 |
| Tatoeba-test.iba-eng.iba.eng | 9.4 | 0.278 |
| Tatoeba-test.ibo-eng.ibo.eng | 5.8 | 0.200 |
| Tatoeba-test.ido-eng.ido.eng | 31.7 | 0.503 |
| Tatoeba-test.iku-eng.iku.eng | 9.1 | 0.164 |
| Tatoeba-test.ile-eng.ile.eng | 42.2 | 0.595 |
| Tatoeba-test.ilo-eng.ilo.eng | 29.7 | 0.485 |
| Tatoeba-test.ina-eng.ina.eng | 42.1 | 0.607 |
| Tatoeba-test.isl-eng.isl.eng | 35.7 | 0.527 |
| Tatoeba-test.ita-eng.ita.eng | 54.8 | 0.686 |
| Tatoeba-test.izh-eng.izh.eng | 28.3 | 0.526 |
| Tatoeba-test.jav-eng.jav.eng | 10.0 | 0.282 |
| Tatoeba-test.jbo-eng.jbo.eng | 0.3 | 0.115 |
| Tatoeba-test.jdt-eng.jdt.eng | 5.3 | 0.140 |
| Tatoeba-test.jpn-eng.jpn.eng | 18.8 | 0.387 |
| Tatoeba-test.kab-eng.kab.eng | 3.9 | 0.205 |
| Tatoeba-test.kal-eng.kal.eng | 16.9 | 0.329 |
| Tatoeba-test.kan-eng.kan.eng | 16.2 | 0.374 |
| Tatoeba-test.kat-eng.kat.eng | 31.1 | 0.493 |
| Tatoeba-test.kaz-eng.kaz.eng | 24.5 | 0.437 |
| Tatoeba-test.kek-eng.kek.eng | 7.4 | 0.192 |
| Tatoeba-test.kha-eng.kha.eng | 1.0 | 0.154 |
| Tatoeba-test.khm-eng.khm.eng | 12.2 | 0.290 |
| Tatoeba-test.kin-eng.kin.eng | 22.5 | 0.355 |
| Tatoeba-test.kir-eng.kir.eng | 27.2 | 0.470 |
| Tatoeba-test.kjh-eng.kjh.eng | 2.1 | 0.129 |
| Tatoeba-test.kok-eng.kok.eng | 4.5 | 0.259 |
| Tatoeba-test.kom-eng.kom.eng | 1.4 | 0.099 |
| Tatoeba-test.krl-eng.krl.eng | 26.1 | 0.387 |
| Tatoeba-test.ksh-eng.ksh.eng | 5.5 | 0.256 |
| Tatoeba-test.kum-eng.kum.eng | 9.3 | 0.288 |
| Tatoeba-test.kur-eng.kur.eng | 9.6 | 0.208 |
| Tatoeba-test.lad-eng.lad.eng | 30.1 | 0.475 |
| Tatoeba-test.lah-eng.lah.eng | 11.6 | 0.284 |
| Tatoeba-test.lao-eng.lao.eng | 4.5 | 0.214 |
| Tatoeba-test.lat-eng.lat.eng | 21.5 | 0.402 |
| Tatoeba-test.lav-eng.lav.eng | 40.2 | 0.577 |
| Tatoeba-test.ldn-eng.ldn.eng | 0.8 | 0.115 |
| Tatoeba-test.lfn-eng.lfn.eng | 23.0 | 0.433 |
| Tatoeba-test.lij-eng.lij.eng | 9.3 | 0.287 |
| Tatoeba-test.lin-eng.lin.eng | 2.4 | 0.196 |
| Tatoeba-test.lit-eng.lit.eng | 44.0 | 0.597 |
| Tatoeba-test.liv-eng.liv.eng | 1.6 | 0.115 |
| Tatoeba-test.lkt-eng.lkt.eng | 2.0 | 0.113 |
| Tatoeba-test.lld-eng.lld.eng | 18.3 | 0.312 |
| Tatoeba-test.lmo-eng.lmo.eng | 25.4 | 0.395 |
| Tatoeba-test.ltz-eng.ltz.eng | 35.9 | 0.509 |
| Tatoeba-test.lug-eng.lug.eng | 5.1 | 0.357 |
| Tatoeba-test.mad-eng.mad.eng | 2.8 | 0.123 |
| Tatoeba-test.mah-eng.mah.eng | 5.7 | 0.175 |
| Tatoeba-test.mai-eng.mai.eng | 56.3 | 0.703 |
| Tatoeba-test.mal-eng.mal.eng | 37.5 | 0.534 |
| Tatoeba-test.mar-eng.mar.eng | 22.8 | 0.470 |
| Tatoeba-test.mdf-eng.mdf.eng | 2.0 | 0.110 |
| Tatoeba-test.mfe-eng.mfe.eng | 59.2 | 0.764 |
| Tatoeba-test.mic-eng.mic.eng | 9.0 | 0.199 |
| Tatoeba-test.mkd-eng.mkd.eng | 44.3 | 0.593 |
| Tatoeba-test.mlg-eng.mlg.eng | 31.9 | 0.424 |
| Tatoeba-test.mlt-eng.mlt.eng | 38.6 | 0.540 |
| Tatoeba-test.mnw-eng.mnw.eng | 2.5 | 0.101 |
| Tatoeba-test.moh-eng.moh.eng | 0.3 | 0.110 |
| Tatoeba-test.mon-eng.mon.eng | 13.5 | 0.334 |
| Tatoeba-test.mri-eng.mri.eng | 8.5 | 0.260 |
| Tatoeba-test.msa-eng.msa.eng | 33.9 | 0.520 |
| Tatoeba-test.multi.eng | 34.7 | 0.518 |
| Tatoeba-test.mwl-eng.mwl.eng | 37.4 | 0.630 |
| Tatoeba-test.mya-eng.mya.eng | 15.5 | 0.335 |
| Tatoeba-test.myv-eng.myv.eng | 0.8 | 0.118 |
| Tatoeba-test.nau-eng.nau.eng | 9.0 | 0.186 |
| Tatoeba-test.nav-eng.nav.eng | 1.3 | 0.144 |
| Tatoeba-test.nds-eng.nds.eng | 30.7 | 0.495 |
| Tatoeba-test.nep-eng.nep.eng | 3.5 | 0.168 |
| Tatoeba-test.niu-eng.niu.eng | 42.7 | 0.492 |
| Tatoeba-test.nld-eng.nld.eng | 47.9 | 0.640 |
| Tatoeba-test.nog-eng.nog.eng | 12.7 | 0.284 |
| Tatoeba-test.non-eng.non.eng | 43.8 | 0.586 |
| Tatoeba-test.nor-eng.nor.eng | 45.5 | 0.619 |
| Tatoeba-test.nov-eng.nov.eng | 26.9 | 0.472 |
| Tatoeba-test.nya-eng.nya.eng | 33.2 | 0.456 |
| Tatoeba-test.oci-eng.oci.eng | 17.9 | 0.370 |
| Tatoeba-test.ori-eng.ori.eng | 14.6 | 0.305 |
| Tatoeba-test.orv-eng.orv.eng | 11.0 | 0.283 |
| Tatoeba-test.oss-eng.oss.eng | 4.1 | 0.211 |
| Tatoeba-test.ota-eng.ota.eng | 4.1 | 0.216 |
| Tatoeba-test.pag-eng.pag.eng | 24.3 | 0.468 |
| Tatoeba-test.pan-eng.pan.eng | 16.4 | 0.358 |
| Tatoeba-test.pap-eng.pap.eng | 53.2 | 0.628 |
| Tatoeba-test.pau-eng.pau.eng | 3.7 | 0.173 |
| Tatoeba-test.pdc-eng.pdc.eng | 45.3 | 0.569 |
| Tatoeba-test.pms-eng.pms.eng | 14.0 | 0.345 |
| Tatoeba-test.pol-eng.pol.eng | 41.7 | 0.588 |
| Tatoeba-test.por-eng.por.eng | 51.4 | 0.669 |
| Tatoeba-test.ppl-eng.ppl.eng | 0.4 | 0.134 |
| Tatoeba-test.prg-eng.prg.eng | 4.1 | 0.198 |
| Tatoeba-test.pus-eng.pus.eng | 6.7 | 0.233 |
| Tatoeba-test.quc-eng.quc.eng | 3.5 | 0.091 |
| Tatoeba-test.qya-eng.qya.eng | 0.2 | 0.090 |
| Tatoeba-test.rap-eng.rap.eng | 17.5 | 0.230 |
| Tatoeba-test.rif-eng.rif.eng | 4.2 | 0.164 |
| Tatoeba-test.roh-eng.roh.eng | 24.6 | 0.464 |
| Tatoeba-test.rom-eng.rom.eng | 3.4 | 0.212 |
| Tatoeba-test.ron-eng.ron.eng | 45.2 | 0.620 |
| Tatoeba-test.rue-eng.rue.eng | 21.4 | 0.390 |
| Tatoeba-test.run-eng.run.eng | 24.5 | 0.392 |
| Tatoeba-test.rus-eng.rus.eng | 42.7 | 0.591 |
| Tatoeba-test.sag-eng.sag.eng | 3.4 | 0.187 |
| Tatoeba-test.sah-eng.sah.eng | 5.0 | 0.177 |
| Tatoeba-test.san-eng.san.eng | 2.0 | 0.172 |
| Tatoeba-test.scn-eng.scn.eng | 35.8 | 0.410 |
| Tatoeba-test.sco-eng.sco.eng | 34.6 | 0.520 |
| Tatoeba-test.sgs-eng.sgs.eng | 21.8 | 0.299 |
| Tatoeba-test.shs-eng.shs.eng | 1.8 | 0.122 |
| Tatoeba-test.shy-eng.shy.eng | 1.4 | 0.104 |
| Tatoeba-test.sin-eng.sin.eng | 20.6 | 0.429 |
| Tatoeba-test.sjn-eng.sjn.eng | 1.2 | 0.095 |
| Tatoeba-test.slv-eng.slv.eng | 37.0 | 0.545 |
| Tatoeba-test.sma-eng.sma.eng | 4.4 | 0.147 |
| Tatoeba-test.sme-eng.sme.eng | 8.9 | 0.229 |
| Tatoeba-test.smo-eng.smo.eng | 37.7 | 0.483 |
| Tatoeba-test.sna-eng.sna.eng | 18.0 | 0.359 |
| Tatoeba-test.snd-eng.snd.eng | 28.1 | 0.444 |
| Tatoeba-test.som-eng.som.eng | 23.6 | 0.472 |
| Tatoeba-test.spa-eng.spa.eng | 47.9 | 0.645 |
| Tatoeba-test.sqi-eng.sqi.eng | 46.9 | 0.634 |
| Tatoeba-test.stq-eng.stq.eng | 8.1 | 0.379 |
| Tatoeba-test.sun-eng.sun.eng | 23.8 | 0.369 |
| Tatoeba-test.swa-eng.swa.eng | 6.5 | 0.193 |
| Tatoeba-test.swe-eng.swe.eng | 51.4 | 0.655 |
| Tatoeba-test.swg-eng.swg.eng | 18.5 | 0.342 |
| Tatoeba-test.tah-eng.tah.eng | 25.6 | 0.249 |
| Tatoeba-test.tam-eng.tam.eng | 29.1 | 0.437 |
| Tatoeba-test.tat-eng.tat.eng | 12.9 | 0.327 |
| Tatoeba-test.tel-eng.tel.eng | 21.2 | 0.386 |
| Tatoeba-test.tet-eng.tet.eng | 9.2 | 0.215 |
| Tatoeba-test.tgk-eng.tgk.eng | 12.7 | 0.374 |
| Tatoeba-test.tha-eng.tha.eng | 36.3 | 0.531 |
| Tatoeba-test.tir-eng.tir.eng | 9.1 | 0.267 |
| Tatoeba-test.tlh-eng.tlh.eng | 0.2 | 0.084 |
| Tatoeba-test.tly-eng.tly.eng | 2.1 | 0.128 |
| Tatoeba-test.toi-eng.toi.eng | 5.3 | 0.150 |
| Tatoeba-test.ton-eng.ton.eng | 39.5 | 0.473 |
| Tatoeba-test.tpw-eng.tpw.eng | 1.5 | 0.160 |
| Tatoeba-test.tso-eng.tso.eng | 44.7 | 0.526 |
| Tatoeba-test.tuk-eng.tuk.eng | 18.6 | 0.401 |
| Tatoeba-test.tur-eng.tur.eng | 40.5 | 0.573 |
| Tatoeba-test.tvl-eng.tvl.eng | 55.0 | 0.593 |
| Tatoeba-test.tyv-eng.tyv.eng | 19.1 | 0.477 |
| Tatoeba-test.tzl-eng.tzl.eng | 17.7 | 0.333 |
| Tatoeba-test.udm-eng.udm.eng | 3.4 | 0.217 |
| Tatoeba-test.uig-eng.uig.eng | 11.4 | 0.289 |
| Tatoeba-test.ukr-eng.ukr.eng | 43.1 | 0.595 |
| Tatoeba-test.umb-eng.umb.eng | 9.2 | 0.260 |
| Tatoeba-test.urd-eng.urd.eng | 23.2 | 0.426 |
| Tatoeba-test.uzb-eng.uzb.eng | 19.0 | 0.342 |
| Tatoeba-test.vec-eng.vec.eng | 41.1 | 0.409 |
| Tatoeba-test.vie-eng.vie.eng | 30.6 | 0.481 |
| Tatoeba-test.vol-eng.vol.eng | 1.8 | 0.143 |
| Tatoeba-test.war-eng.war.eng | 15.9 | 0.352 |
| Tatoeba-test.wln-eng.wln.eng | 12.6 | 0.291 |
| Tatoeba-test.wol-eng.wol.eng | 4.4 | 0.138 |
| Tatoeba-test.xal-eng.xal.eng | 0.9 | 0.153 |
| Tatoeba-test.xho-eng.xho.eng | 35.4 | 0.513 |
| Tatoeba-test.yid-eng.yid.eng | 19.4 | 0.387 |
| Tatoeba-test.yor-eng.yor.eng | 19.3 | 0.327 |
| Tatoeba-test.zho-eng.zho.eng | 25.8 | 0.448 |
| Tatoeba-test.zul-eng.zul.eng | 40.9 | 0.567 |
| Tatoeba-test.zza-eng.zza.eng | 1.6 | 0.125 |
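Benchmark tables like the one above are easy to turn back into numbers, e.g. to rank test sets by BLEU. A throwaway parsing sketch (not part of the released evaluation tooling):

```python
def parse_benchmark_row(line: str) -> tuple:
    """Parse one markdown benchmark row into (testset, bleu, chrf)."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    testset, bleu, chrf = cells
    return testset, float(bleu), float(chrf)

# The multilingual aggregate row from this card:
row = parse_benchmark_row("| Tatoeba-test.multi.eng | 34.7 | 0.518 |")
```

Mapping `parse_benchmark_row` over every `|`-delimited line between the header and the next heading recovers the whole table.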
### System Info:
- hf_name: mul-eng
- source_languages: mul
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/mul-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ca', 'es', 'os', 'eo', 'ro', 'fy', 'cy', 'is', 'lb', 'su', 'an', 'sq', 'fr', 'ht', 'rm', 'cv', 'ig', 'am', 'eu', 'tr', 'ps', 'af', 'ny', 'ch', 'uk', 'sl', 'lt', 'tk', 'sg', 'ar', 'lg', 'bg', 'be', 'ka', 'gd', 'ja', 'si', 'br', 'mh', 'km', 'th', 'ty', 'rw', 'te', 'mk', 'or', 'wo', 'kl', 'mr', 'ru', 'yo', 'hu', 'fo', 'zh', 'ti', 'co', 'ee', 'oc', 'sn', 'mt', 'ts', 'pl', 'gl', 'nb', 'bn', 'tt', 'bo', 'lo', 'id', 'gn', 'nv', 'hy', 'kn', 'to', 'io', 'so', 'vi', 'da', 'fj', 'gv', 'sm', 'nl', 'mi', 'pt', 'hi', 'se', 'as', 'ta', 'et', 'kw', 'ga', 'sv', 'ln', 'na', 'mn', 'gu', 'wa', 'lv', 'jv', 'el', 'my', 'ba', 'it', 'hr', 'ur', 'ce', 'nn', 'fi', 'mg', 'rn', 'xh', 'ab', 'de', 'cs', 'he', 'zu', 'yi', 'ml', 'mul', 'en']
- src_constituents: {'sjn_Latn', 'cat', 'nan', 'spa', 'ile_Latn', 'pap', 'mwl', 'uzb_Latn', 'mww', 'hil', 'lij', 'avk_Latn', 'lad_Latn', 'lat_Latn', 'bos_Latn', 'oss', 'epo', 'ron', 'fry', 'cym', 'toi_Latn', 'awa', 'swg', 'zsm_Latn', 'zho_Hant', 'gcf_Latn', 'uzb_Cyrl', 'isl', 'lfn_Latn', 'shs_Latn', 'nov_Latn', 'bho', 'ltz', 'lzh', 'kur_Latn', 'sun', 'arg', 'pes_Thaa', 'sqi', 'uig_Arab', 'csb_Latn', 'fra', 'hat', 'liv_Latn', 'non_Latn', 'sco', 'cmn_Hans', 'pnb', 'roh', 'chv', 'ibo', 'bul_Latn', 'amh', 'lfn_Cyrl', 'eus', 'fkv_Latn', 'tur', 'pus', 'afr', 'brx_Latn', 'nya', 'acm', 'ota_Latn', 'cha', 'ukr', 'xal', 'slv', 'lit', 'zho_Hans', 'tmw_Latn', 'kjh', 'ota_Arab', 'war', 'tuk', 'sag', 'myv', 'hsb', 'lzh_Hans', 'ara', 'tly_Latn', 'lug', 'brx', 'bul', 'bel', 'vol_Latn', 'kat', 'gan', 'got_Goth', 'vro', 'ext', 'afh_Latn', 'gla', 'jpn', 'udm', 'mai', 'ary', 'sin', 'tvl', 'hif_Latn', 'cjy_Hant', 'bre', 'ceb', 'mah', 'nob_Hebr', 'crh_Latn', 'prg_Latn', 'khm', 'ang_Latn', 'tha', 'tah', 'tzl', 'aln', 'kin', 'tel', 'ady', 'mkd', 'ori', 'wol', 'aze_Latn', 'jbo', 'niu', 'kal', 'mar', 'vie_Hani', 'arz', 'yue', 'kha', 'san_Deva', 'jbo_Latn', 'gos', 'hau_Latn', 'rus', 'quc', 'cmn', 'yor', 'hun', 'uig_Cyrl', 'fao', 'mnw', 'zho', 'orv_Cyrl', 'iba', 'bel_Latn', 'tir', 'afb', 'crh', 'mic', 'cos', 'swh', 'sah', 'krl', 'ewe', 'apc', 'zza', 'chr', 'grc_Grek', 'tpw_Latn', 'oci', 'mfe', 'sna', 'kir_Cyrl', 'tat_Latn', 'gom', 'ido_Latn', 'sgs', 'pau', 'tgk_Cyrl', 'nog', 'mlt', 'pdc', 'tso', 'srp_Cyrl', 'pol', 'ast', 'glg', 'pms', 'fuc', 'nob', 'qya', 'ben', 'tat', 'kab', 'min', 'srp_Latn', 'wuu', 'dtp', 'jbo_Cyrl', 'tet', 'bod', 'yue_Hans', 'zlm_Latn', 'lao', 'ind', 'grn', 'nav', 'kaz_Cyrl', 'rom', 'hye', 'kan', 'ton', 'ido', 'mhr', 'scn', 'som', 'rif_Latn', 'vie', 'enm_Latn', 'lmo', 'npi', 'pes', 'dan', 'fij', 'ina_Latn', 'cjy_Hans', 'jdt_Cyrl', 'gsw', 'glv', 'khm_Latn', 'smo', 'umb', 'sma', 'gil', 'nld', 'snd_Arab', 'arq', 'mri', 'kur_Arab', 'por', 'hin', 'shy_Latn', 'sme', 'rap', 
'tyv', 'dsb', 'moh', 'asm', 'lad', 'yue_Hant', 'kpv', 'tam', 'est', 'frm_Latn', 'hoc_Latn', 'bam_Latn', 'kek_Latn', 'ksh', 'tlh_Latn', 'ltg', 'pan_Guru', 'hnj_Latn', 'cor', 'gle', 'swe', 'lin', 'qya_Latn', 'kum', 'mad', 'cmn_Hant', 'fuv', 'nau', 'mon', 'akl_Latn', 'guj', 'kaz_Latn', 'wln', 'tuk_Latn', 'jav_Java', 'lav', 'jav', 'ell', 'frr', 'mya', 'bak', 'rue', 'ita', 'hrv', 'izh', 'ilo', 'dws_Latn', 'urd', 'stq', 'tat_Arab', 'haw', 'che', 'pag', 'nno', 'fin', 'mlg', 'ppl_Latn', 'run', 'xho', 'abk', 'deu', 'hoc', 'lkt', 'lld_Latn', 'tzl_Latn', 'mdf', 'ike_Latn', 'ces', 'ldn_Latn', 'egl', 'heb', 'vec', 'zul', 'max_Latn', 'pes_Latn', 'yid', 'mal', 'nds'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/mul-eng/opus2m-2020-08-01.test.txt
- src_alpha3: mul
- tgt_alpha3: eng
- short_pair: mul-en
- chrF2_score: 0.518
- bleu: 34.7
- brevity_penalty: 1.0
- ref_len: 72346.0
- src_name: Multiple languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: mul
- tgt_alpha2: en
- prefer_old: False
- long_pair: mul-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.017688052728772163,
-0.009157471358776093,
-0.002984133781865239,
0.06427746266126633,
0.023443957790732384,
0.025435322895646095,
0.005475827492773533,
-0.011381593532860279,
-0.056347042322158813,
0.030250361189246178,
-0.005236182361841202,
-0.04793330654501915,
0.008051380515098572,
... |
Declan/NPR_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2020-04-29T13:24:09Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ng-en
* source languages: ng
* target languages: en
* OPUS readme: [ng-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ng-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ng-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ng.en | 27.3 | 0.443 |
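The chr-F column above is a character n-gram F-score. A deliberately simplified toy version for intuition; the real metric (as in sacreBLEU's chrF) differs in details such as whitespace handling and averaging, so treat this only as a sketch:

```python
from collections import Counter

def toy_chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Toy chrF: average character n-gram precision/recall, combined as F_beta."""
    def ngrams(s: str, n: int) -> Counter:
        return Counter(s[i:i + n] for i in range(len(s) - n + 1))

    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        h, r = ngrams(hypothesis, n), ngrams(reference, n)
        if not h or not r:
            continue  # skip orders longer than either string
        overlap = sum((h & r).values())  # clipped n-gram matches
        precisions.append(overlap / sum(h.values()))
        recalls.append(overlap / sum(r.values()))
    if not precisions:
        return 0.0
    p, r = sum(precisions) / len(precisions), sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

An identical hypothesis and reference score 1.0; fully disjoint strings score 0.0.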
| [
-0.009876247495412827,
-0.02579931728541851,
-0.00004069972055731341,
0.035657480359077454,
0.02452441304922104,
0.02565355785191059,
-0.0012406465830281377,
-0.003029034473001957,
-0.04687929153442383,
0.05984410271048546,
0.014131646603345871,
-0.007310541812330484,
0.009818937629461288,
... |
Declan/NPR_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language:
- sn
- rw
- wo
- ig
- sg
- ee
- zu
- lg
- ts
- ln
- ny
- yo
- rn
- xh
- nic
- en
tags:
- translation
license: apache-2.0
---
### nic-eng
* source group: Niger-Kordofanian languages
* target group: English
* OPUS readme: [nic-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nic-eng/README.md)
* model: transformer
* source language(s): bam_Latn ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bam-eng.bam.eng | 2.4 | 0.090 |
| Tatoeba-test.ewe-eng.ewe.eng | 10.3 | 0.384 |
| Tatoeba-test.ful-eng.ful.eng | 1.2 | 0.114 |
| Tatoeba-test.ibo-eng.ibo.eng | 7.5 | 0.197 |
| Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.481 |
| Tatoeba-test.lin-eng.lin.eng | 3.1 | 0.185 |
| Tatoeba-test.lug-eng.lug.eng | 3.1 | 0.261 |
| Tatoeba-test.multi.eng | 21.3 | 0.377 |
| Tatoeba-test.nya-eng.nya.eng | 31.6 | 0.502 |
| Tatoeba-test.run-eng.run.eng | 24.9 | 0.420 |
| Tatoeba-test.sag-eng.sag.eng | 5.2 | 0.231 |
| Tatoeba-test.sna-eng.sna.eng | 20.1 | 0.374 |
| Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.191 |
| Tatoeba-test.toi-eng.toi.eng | 4.8 | 0.122 |
| Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 |
| Tatoeba-test.umb-eng.umb.eng | 9.0 | 0.246 |
| Tatoeba-test.wol-eng.wol.eng | 14.0 | 0.212 |
| Tatoeba-test.xho-eng.xho.eng | 38.2 | 0.558 |
| Tatoeba-test.yor-eng.yor.eng | 21.2 | 0.364 |
| Tatoeba-test.zul-eng.zul.eng | 42.3 | 0.589 |
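The System Info block below reports `brevity_penalty: 1.0` alongside the aggregate BLEU. In standard BLEU, the brevity penalty discounts hypotheses shorter than the reference (a value of 1.0 means no length penalty was applied). A minimal sketch of that formula:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: 1.0 unless the hypothesis is shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# With hypothesis length >= the reference length (ref_len 15228 in this card),
# the penalty is 1.0, matching the reported value.
bp = brevity_penalty(15228, 15228)
```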
### System Info:
- hf_name: nic-eng
- source_languages: nic
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nic-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'nic', 'en']
- src_constituents: {'bam_Latn', 'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nic-eng/opus2m-2020-08-01.test.txt
- src_alpha3: nic
- tgt_alpha3: eng
- short_pair: nic-en
- chrF2_score: 0.377
- bleu: 21.3
- brevity_penalty: 1.0
- ref_len: 15228.0
- src_name: Niger-Kordofanian languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: nic
- tgt_alpha2: en
- prefer_old: False
- long_pair: nic-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.014274601824581623,
-0.03213519975543022,
-0.006560842040926218,
0.03881780803203583,
0.04596826806664467,
0.029070571064949036,
-0.009520177729427814,
-0.010842317715287209,
-0.0730322077870369,
0.0472373366355896,
0.015404495410621166,
-0.022092292085289955,
-0.011306222528219223,
0.0... |
Declan/NPR_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2020-04-29T13:24:23Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-niu-de
* source languages: niu
* target languages: de
* OPUS readme: [niu-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.de | 20.2 | 0.395 |
| [
-0.015714939683675766,
-0.028237955644726753,
-0.003650862257927656,
0.03522985801100731,
0.028173595666885376,
0.025933369994163513,
-0.0033503128215670586,
-0.010111442767083645,
-0.054129708558321,
0.05505703017115593,
0.009889332577586174,
-0.012173419818282127,
0.002862371737137437,
0... |
Declan/NPR_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-niu-en
* source languages: niu
* target languages: en
* OPUS readme: [niu-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.en | 46.1 | 0.604 |
| [
-0.018503040075302124,
-0.02683086507022381,
-0.003797882003709674,
0.035543058067560196,
0.027207275852560997,
0.023232240229845047,
-0.0001685226452536881,
-0.010002982802689075,
-0.05176292359828949,
0.0520215667784214,
0.009026914834976196,
-0.00961305946111679,
0.007219785358756781,
0... |
Declan/NPR_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2020-04-29T13:25:02Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-niu-es
* source languages: niu
* target languages: es
* OPUS readme: [niu-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.es | 24.2 | 0.419 |
| [
-0.016503432765603065,
-0.030126050114631653,
0.0006240505608730018,
0.033838190138339996,
0.029568180441856384,
0.02336534671485424,
-0.0021260161884129047,
-0.0077930898405611515,
-0.04991123452782631,
0.04976082965731621,
0.004110340960323811,
-0.011771328747272491,
0.005755576305091381,
... |
Declan/NPR_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-niu-fi
* source languages: niu
* target languages: fi
* OPUS readme: [niu-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.fi | 24.8 | 0.474 |
| [
-0.019145255908370018,
-0.03460810333490372,
0.005982504226267338,
0.03127499297261238,
0.023747384548187256,
0.025959506630897522,
0.0013625927967950702,
-0.01156339980661869,
-0.054394595324993134,
0.0467761754989624,
0.011423218064010143,
-0.005057724192738533,
0.007726071402430534,
0.0... |
Declan/NewYorkPost_model_v1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2020-04-29T13:25:45Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-niu-fr
* source languages: niu
* target languages: fr
* OPUS readme: [niu-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.fr | 28.1 | 0.452 |
| [
-0.01587792858481407,
-0.030409149825572968,
-0.009664100594818592,
0.03642076998949051,
0.028190942481160164,
0.02195592224597931,
-0.005587087478488684,
-0.011057807132601738,
-0.049528125673532486,
0.0514792762696743,
0.008660140447318554,
-0.008103447034955025,
-0.0006988851237110794,
... |
Declan/NewYorkTimes_model_v1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-niu-sv
* source languages: niu
* target languages: sv
* OPUS readme: [niu-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.niu.sv | 29.2 | 0.478 |
| [
-0.020296955481171608,
-0.02800208143889904,
-0.0004619630053639412,
0.03505967929959297,
0.029836300760507584,
0.019655738025903702,
0.0010602000402286649,
-0.010119391605257988,
-0.05161811783909798,
0.05320703983306885,
0.0070902020670473576,
-0.008302842266857624,
0.01433554757386446,
... |
Declan/NewYorkTimes_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2020-08-19T00:30:56Z | ---
language:
- nl
- af
tags:
- translation
license: apache-2.0
---
### nld-afr
* source group: Dutch
* target group: Afrikaans
* OPUS readme: [nld-afr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-afr/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): afr
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.afr | 57.8 | 0.749 |
### System Info:
- hf_name: nld-afr
- source_languages: nld
- target_languages: afr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-afr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'af']
- src_constituents: {'nld'}
- tgt_constituents: {'afr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-afr/opus-2020-06-17.test.txt
- src_alpha3: nld
- tgt_alpha3: afr
- short_pair: nl-af
- chrF2_score: 0.7490000000000001
- bleu: 57.8
- brevity_penalty: 1.0
- ref_len: 6823.0
- src_name: Dutch
- tgt_name: Afrikaans
- train_date: 2020-06-17
- src_alpha2: nl
- tgt_alpha2: af
- prefer_old: False
- long_pair: nld-afr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.010077356360852718,
-0.028661144897341728,
-0.021113477647304535,
0.03041764907538891,
0.056337837129831314,
0.027340250089764595,
-0.006431571673601866,
-0.00913204438984394,
-0.062125641852617264,
0.06126274913549423,
0.00542479520663619,
-0.0248210858553648,
-0.008526382967829704,
0.... |
Declan/NewYorkTimes_model_v3 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- nl
- ca
tags:
- translation
license: apache-2.0
---
### nld-cat
* source group: Dutch
* target group: Catalan
* OPUS readme: [nld-cat](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-cat/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): cat
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.cat | 42.1 | 0.624 |
### System Info:
- hf_name: nld-cat
- source_languages: nld
- target_languages: cat
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-cat/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'ca']
- src_constituents: {'nld'}
- tgt_constituents: {'cat'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-cat/opus-2020-06-16.test.txt
- src_alpha3: nld
- tgt_alpha3: cat
- short_pair: nl-ca
- chrF2_score: 0.624
- bleu: 42.1
- brevity_penalty: 0.988
- ref_len: 3942.0
- src_name: Dutch
- tgt_name: Catalan
- train_date: 2020-06-16
- src_alpha2: nl
- tgt_alpha2: ca
- prefer_old: False
- long_pair: nld-cat
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.005626863334327936,
-0.027416978031396866,
-0.01231156475841999,
0.0336124561727047,
0.04637804254889488,
0.018203988671302795,
-0.0075033423490822315,
-0.00291338749229908,
-0.06645473837852478,
0.05703378841280937,
0.002979801967740059,
-0.02396712265908718,
-0.00725252740085125,
0.05... |
Declan/NewYorkTimes_model_v4 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2020-04-29T13:26:09Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nl-en
* source languages: nl
* target languages: en
* OPUS readme: [nl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-05.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.zip)
* test set translations: [opus-2019-12-05.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.test.txt)
* test set scores: [opus-2019-12-05.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-en/opus-2019-12-05.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.nl.en | 60.9 | 0.749 |
| [
-0.014745294116437435,
-0.02932536043226719,
-0.0031230428721755743,
0.033975593745708466,
0.03364065662026405,
0.02261229045689106,
-0.003619448048993945,
-0.005673312582075596,
-0.04975339397788048,
0.055663131177425385,
0.010102935135364532,
-0.01566941849887371,
0.004346661269664764,
0... |
Declan/NewYorkTimes_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- nl
- eo
tags:
- translation
license: apache-2.0
---
### nld-epo
* source group: Dutch
* target group: Esperanto
* OPUS readme: [nld-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-epo/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.epo | 16.1 | 0.355 |
### System Info:
- hf_name: nld-epo
- source_languages: nld
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'eo']
- src_constituents: {'nld'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.test.txt
- src_alpha3: nld
- tgt_alpha3: epo
- short_pair: nl-eo
- chrF2_score: 0.355
- bleu: 16.1
- brevity_penalty: 0.9359999999999999
- ref_len: 72293.0
- src_name: Dutch
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: nl
- tgt_alpha2: eo
- prefer_old: False
- long_pair: nld-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.008252414874732494,
-0.02718559093773365,
-0.008263218216598034,
0.024252839386463165,
0.05173754319548607,
0.0200518649071455,
-0.0020881181117147207,
-0.006578876171261072,
-0.06720931082963943,
0.05667825788259506,
-0.0020866591949015856,
-0.026874128729104996,
-0.006192922592163086,
... |
Declan/NewYorkTimes_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nl-es
* source languages: nl
* target languages: es
* OPUS readme: [nl-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.nl.es | 51.6 | 0.698 |
| [
-0.014513013884425163,
-0.030408797785639763,
0.0007849147659726441,
0.033385153859853745,
0.03298578038811684,
0.021984992548823357,
-0.004130885004997253,
-0.004898276180028915,
-0.04470285028219223,
0.052768684923648834,
0.005307465326040983,
-0.014904416166245937,
0.0023902282118797302,
... |
Declan/Politico_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nl-fi
* source languages: nl
* target languages: fi
* OPUS readme: [nl-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fi/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nl.fi | 28.6 | 0.569 |
| [
-0.014837845228612423,
-0.034830108284950256,
0.003696024650707841,
0.03207884356379509,
0.02570173144340515,
0.024299869313836098,
0.0008870684541761875,
-0.007497839163988829,
-0.05030042678117752,
0.05066826939582825,
0.015185488387942314,
-0.01356931310147047,
0.007328676525503397,
0.0... |
Declan/Politico_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nl-fr
* source languages: nl
* target languages: fr
* OPUS readme: [nl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-fr/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.nl.fr | 51.3 | 0.674 |
| [
-0.013367557898163795,
-0.031327251344919205,
-0.0068641165271401405,
0.03333158791065216,
0.02979978360235691,
0.022457381710410118,
-0.006649896968156099,
-0.007481626234948635,
-0.04641644284129143,
0.05263851583003998,
0.008845475502312183,
-0.012087211012840271,
-0.0031804360914975405,
... |
Declan/Politico_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- nl
- no
tags:
- translation
license: apache-2.0
---
### nld-nor
* source group: Dutch
* target group: Norwegian
* OPUS readme: [nld-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-nor/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.nor | 36.1 | 0.562 |
### System Info:
- hf_name: nld-nor
- source_languages: nld
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'no']
- src_constituents: {'nld'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-nor/opus-2020-06-17.test.txt
- src_alpha3: nld
- tgt_alpha3: nor
- short_pair: nl-no
- chrF2_score: 0.562
- bleu: 36.1
- brevity_penalty: 0.966
- ref_len: 1459.0
- src_name: Dutch
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: nl
- tgt_alpha2: no
- prefer_old: False
- long_pair: nld-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.005939009599387646,
-0.02441266179084778,
-0.008287610486149788,
0.03228762745857239,
0.04908674955368042,
0.01361771859228611,
-0.00927866343408823,
-0.008483870886266232,
-0.07245738804340363,
0.05994937941431999,
0.0046574948355555534,
-0.02704712189733982,
-0.009525666013360023,
0.0... |
Declan/Politico_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2020-04-29T13:28:09Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nl-sv
* source languages: nl
* target languages: sv
* OPUS readme: [nl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nl-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.nl.sv | 25.0 | 0.518 |
| [
-0.016289792954921722,
-0.023790040984749794,
-0.0035062653478235006,
0.03614030405879021,
0.03270995244383812,
0.01854857988655567,
0.0015373462811112404,
-0.005628923885524273,
-0.04361562803387642,
0.0604434534907341,
0.008953502401709557,
-0.01569250039756298,
0.015485565178096294,
0.0... |
Declan/Politico_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- nl
- uk
tags:
- translation
license: apache-2.0
---
### nld-ukr
* source group: Dutch
* target group: Ukrainian
* OPUS readme: [nld-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-ukr/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): ukr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.ukr | 40.8 | 0.619 |
### System Info:
- hf_name: nld-ukr
- source_languages: nld
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'uk']
- src_constituents: {'nld'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-ukr/opus-2020-06-17.test.txt
- src_alpha3: nld
- tgt_alpha3: ukr
- short_pair: nl-uk
- chrF2_score: 0.619
- bleu: 40.8
- brevity_penalty: 0.992
- ref_len: 51674.0
- src_name: Dutch
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: nl
- tgt_alpha2: uk
- prefer_old: False
- long_pair: nld-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.003756140824407339,
-0.022210560739040375,
-0.009159567765891552,
0.0355551578104496,
0.05042152479290962,
0.019048944115638733,
-0.004685736261308193,
-0.008032298646867275,
-0.07157211750745773,
0.06256214529275894,
0.007703975308686495,
-0.023668302223086357,
-0.008174215443432331,
0... |
Declan/Politico_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- no
- da
tags:
- translation
license: apache-2.0
---
### nor-dan
* source group: Norwegian
* target group: Danish
* OPUS readme: [nor-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-dan/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): dan
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.dan | 65.0 | 0.792 |
### System Info:
- hf_name: nor-dan
- source_languages: nor
- target_languages: dan
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-dan/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'da']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'dan'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-dan/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: dan
- short_pair: no-da
- chrF2_score: 0.792
- bleu: 65.0
- brevity_penalty: 0.995
- ref_len: 9865.0
- src_name: Norwegian
- tgt_name: Danish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: da
- prefer_old: False
- long_pair: nor-dan
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.0037233021575957537,
-0.026259953156113625,
-0.001711381832137704,
0.038611698895692825,
0.046390797942876816,
0.010916768573224545,
-0.008757166564464569,
-0.013993733562529087,
-0.07736072689294815,
0.05622780695557594,
0.008044052869081497,
-0.027441181242465973,
-0.014674975536763668,... |
Declan/Politico_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2020-08-19T00:31:15Z | ---
language:
- no
- de
tags:
- translation
license: apache-2.0
---
### nor-deu
* source group: Norwegian
* target group: German
* OPUS readme: [nor-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-deu/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): deu
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.deu | 29.6 | 0.541 |
### System Info:
- hf_name: nor-deu
- source_languages: nor
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'de']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-deu/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: deu
- short_pair: no-de
- chrF2_score: 0.541
- bleu: 29.6
- brevity_penalty: 0.96
- ref_len: 34575.0
- src_name: Norwegian
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: de
- prefer_old: False
- long_pair: nor-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.0034642978571355343,
-0.028994202613830566,
-0.00792817771434784,
0.03348641097545624,
0.04195789620280266,
0.01805376075208187,
-0.01028008759021759,
-0.003376423381268978,
-0.07929584383964539,
0.06089841201901436,
0.007605806924402714,
-0.02438542991876602,
-0.014053188264369965,
0.0... |
Declan/Reuters_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- no
- es
tags:
- translation
license: apache-2.0
---
### nor-spa
* source group: Norwegian
* target group: Spanish
* OPUS readme: [nor-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-spa/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.spa | 34.2 | 0.565 |
### System Info:
- hf_name: nor-spa
- source_languages: nor
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'es']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-spa/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: spa
- short_pair: no-es
- chrF2_score: 0.565
- bleu: 34.2
- brevity_penalty: 0.997
- ref_len: 7311.0
- src_name: Norwegian
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: es
- prefer_old: False
- long_pair: nor-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.0025055299047380686,
-0.028546111658215523,
-0.005699566565454006,
0.03552188724279404,
0.044638995081186295,
0.010674632154405117,
-0.0126752695068717,
0.0041550821624696255,
-0.07178249955177307,
0.05921526625752449,
-0.0006453240057453513,
-0.025140875950455666,
-0.013612005859613419,
... |
Declan/Reuters_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- no
- fi
tags:
- translation
license: apache-2.0
---
### nor-fin
* source group: Norwegian
* target group: Finnish
* OPUS readme: [nor-fin](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fin/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): fin
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.fin | 14.1 | 0.374 |
### System Info:
- hf_name: nor-fin
- source_languages: nor
- target_languages: fin
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fin/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'fi']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'fin'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fin/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: fin
- short_pair: no-fi
- chrF2_score: 0.374
- bleu: 14.1
- brevity_penalty: 0.894
- ref_len: 13066.0
- src_name: Norwegian
- tgt_name: Finnish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: fi
- prefer_old: False
- long_pair: nor-fin
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.009593235328793526,
-0.030256548896431923,
-0.002132430672645569,
0.02512279525399208,
0.04425358399748802,
0.015154367312788963,
-0.014807484112679958,
-0.0005174665711820126,
-0.0878617987036705,
0.051943592727184296,
0.011893326416611671,
-0.022704970091581345,
-0.00968160666525364,
... |
Declan/Reuters_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- no
- fr
tags:
- translation
license: apache-2.0
---
### nor-fra
* source group: Norwegian
* target group: French
* OPUS readme: [nor-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fra/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): fra
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.fra | 39.1 | 0.578 |
### System Info:
- hf_name: nor-fra
- source_languages: nor
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'fr']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-fra/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: fra
- short_pair: no-fr
- chrF2_score: 0.578
- bleu: 39.1
- brevity_penalty: 0.987
- ref_len: 3205.0
- src_name: Norwegian
- tgt_name: French
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: fr
- prefer_old: False
- long_pair: nor-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.0013806021306663752,
-0.030455846339464188,
-0.009131287224590778,
0.030219389125704765,
0.045629870146512985,
0.007673716638237238,
-0.015098812058568,
-0.004308507777750492,
-0.07427843660116196,
0.056586265563964844,
0.006161955185234547,
-0.021635927259922028,
-0.017154322937130928,
... |
Declan/Reuters_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2020-08-19T00:31:41Z | ---
language:
- no
- nl
tags:
- translation
license: apache-2.0
---
### nor-nld
* source group: Norwegian
* target group: Dutch
* OPUS readme: [nor-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nld/README.md)
* model: transformer-align
* source language(s): nob
* target language(s): nld
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.nld | 40.2 | 0.596 |
### System Info:
- hf_name: nor-nld
- source_languages: nor
- target_languages: nld
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nld/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'nl']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'nld'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nld/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: nld
- short_pair: no-nl
- chrF2_score: 0.596
- bleu: 40.2
- brevity_penalty: 0.959
- ref_len: 1535.0
- src_name: Norwegian
- tgt_name: Dutch
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: nl
- prefer_old: False
- long_pair: nor-nld
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.006397186778485775,
-0.025105193257331848,
-0.010247600264847279,
0.031245924532413483,
0.047864895313978195,
0.013590848073363304,
-0.010316269472241402,
-0.008092471398413181,
-0.07038943469524384,
0.059435613453388214,
0.006256427615880966,
-0.024790816009044647,
-0.00840176921337843,
... |
Declan/Reuters_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- no
tags:
- translation
license: apache-2.0
---
### nor-nor
* source group: Norwegian
* target group: Norwegian
* OPUS readme: [nor-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nor/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): nno nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.nor | 58.4 | 0.784 |
### System Info:
- hf_name: nor-nor
- source_languages: nor
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-nor/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: nor
- short_pair: no-no
- chrF2_score: 0.784
- bleu: 58.4
- brevity_penalty: 0.988
- ref_len: 6351.0
- src_name: Norwegian
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: no
- prefer_old: False
- long_pair: nor-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.006793338339775801,
-0.023915767669677734,
-0.0057326992973685265,
0.04099337384104729,
0.045326631516218185,
0.013117528520524502,
-0.010858308523893356,
-0.00687261251732707,
-0.07427886873483658,
0.062143489718437195,
0.010794027708470821,
-0.025009609758853912,
-0.0124049698933959,
... |
Declan/Reuters_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- no
- pl
tags:
- translation
license: apache-2.0
---
### nor-pol
* source group: Norwegian
* target group: Polish
* OPUS readme: [nor-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-pol/README.md)
* model: transformer-align
* source language(s): nob
* target language(s): pol
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.pol | 20.9 | 0.455 |
### System Info:
- hf_name: nor-pol
- source_languages: nor
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'pl']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-pol/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: pol
- short_pair: no-pl
- chrF2_score: 0.455
- bleu: 20.9
- brevity_penalty: 0.941
- ref_len: 1828.0
- src_name: Norwegian
- tgt_name: Polish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: pl
- prefer_old: False
- long_pair: nor-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.008063632994890213,
-0.030731501057744026,
-0.011225390248000622,
0.03322196379303932,
0.04511459544301033,
0.008742723613977432,
-0.00583108002319932,
-0.0015263563254848123,
-0.08081722259521484,
0.060514118522405624,
0.014258925803005695,
-0.025654228404164314,
-0.020934555679559708,
... |
Declan/Reuters_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- no
- ru
tags:
- translation
license: apache-2.0
---
### nor-rus
* source group: Norwegian
* target group: Russian
* OPUS readme: [nor-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-rus/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): rus
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.rus | 18.6 | 0.400 |
### System Info:
- hf_name: nor-rus
- source_languages: nor
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'ru']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-rus/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: rus
- short_pair: no-ru
- chrF2_score: 0.4
- bleu: 18.6
- brevity_penalty: 0.958
- ref_len: 10671.0
- src_name: Norwegian
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: ru
- prefer_old: False
- long_pair: nor-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.004866951145231724,
-0.031754523515701294,
-0.010692152194678783,
0.034637462347745895,
0.05090676620602608,
0.01738731376826763,
-0.01367071084678173,
-0.00590531388297677,
-0.08564648032188416,
0.059011783450841904,
0.01639915443956852,
-0.02606866881251335,
-0.011649020947515965,
0.0... |
Declan/WallStreetJournal_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- "no"
- sv
tags:
- translation
license: apache-2.0
---
### nor-swe
* source group: Norwegian
* target group: Swedish
* OPUS readme: [nor-swe](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-swe/README.md)
* model: transformer-align
* source language(s): nno nob
* target language(s): swe
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.swe | 63.7 | 0.773 |
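The BLEU score above folds in a brevity penalty, which is also reported in the System Info block. A minimal sketch of the standard BLEU brevity-penalty formula (the hypothesis length of 3553 is inferred from the reported penalty and reference length, not reported by the card):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """BLEU brevity penalty: penalizes outputs shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# With ref_len = 3672 tokens, a penalty of ~0.967 implies the system
# produced roughly 3553 tokens of output.
print(round(brevity_penalty(3553, 3672), 3))  # 0.967
```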
### System Info:
- hf_name: nor-swe
- source_languages: nor
- target_languages: swe
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-swe/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'sv']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'swe'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-swe/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: swe
- short_pair: no-sv
- chrF2_score: 0.773
- bleu: 63.7
- brevity_penalty: 0.967
- ref_len: 3672.0
- src_name: Norwegian
- tgt_name: Swedish
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: sv
- prefer_old: False
- long_pair: nor-swe
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.0047042774967849255,
-0.027684565633535385,
-0.012066219002008438,
0.03303658962249756,
0.04356612265110016,
0.013943091034889221,
-0.012823216617107391,
-0.002338781487196684,
-0.07590147107839584,
0.06096922233700752,
0.008621709421277046,
-0.021824214607477188,
-0.009647670201957226,
... |
Declan/WallStreetJournal_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language:
- "no"
- uk
tags:
- translation
license: apache-2.0
---
### nor-ukr
* source group: Norwegian
* target group: Ukrainian
* OPUS readme: [nor-ukr](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-ukr/README.md)
* model: transformer-align
* source language(s): nob
* target language(s): ukr
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nor.ukr | 16.6 | 0.384 |
### System Info:
- hf_name: nor-ukr
- source_languages: nor
- target_languages: ukr
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nor-ukr/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['no', 'uk']
- src_constituents: {'nob', 'nno'}
- tgt_constituents: {'ukr'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nor-ukr/opus-2020-06-17.test.txt
- src_alpha3: nor
- tgt_alpha3: ukr
- short_pair: no-uk
- chrF2_score: 0.384
- bleu: 16.6
- brevity_penalty: 1.0
- ref_len: 3982.0
- src_name: Norwegian
- tgt_name: Ukrainian
- train_date: 2020-06-17
- src_alpha2: no
- tgt_alpha2: uk
- prefer_old: False
- long_pair: nor-ukr
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.0020111529156565666,
-0.023189760744571686,
-0.005212053656578064,
0.038618724793195724,
0.04493231698870659,
0.012344416230916977,
-0.007739064283668995,
-0.004737725015729666,
-0.07976105809211731,
0.0632317066192627,
0.012378869578242302,
-0.022942328825592995,
-0.012279665097594261,
... |
Declan/WallStreetJournal_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nso-de
* source languages: nso
* target languages: de
* OPUS readme: [nso-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.de | 24.7 | 0.461 |
| [
-0.016437651589512825,
-0.03052113763988018,
-0.0030592274852097034,
0.03277765214443207,
0.02757256291806698,
0.024284256622195244,
-0.003556622425094247,
-0.00010406570800114423,
-0.05384734272956848,
0.057589657604694366,
0.011892163194715977,
-0.011746019124984741,
0.003475061384961009,
... |
Declan/WallStreetJournal_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nso-en
* source languages: nso
* target languages: en
* OPUS readme: [nso-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.en | 48.6 | 0.634 |
| [
-0.01659475639462471,
-0.0312497541308403,
-0.005377591121941805,
0.03367762640118599,
0.027331290766596794,
0.02095603570342064,
-0.0006992541020736098,
-0.0014065870782360435,
-0.05010896548628807,
0.05456961318850517,
0.012327395379543304,
-0.012065802700817585,
0.006860809400677681,
0.... |
Declan/WallStreetJournal_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nso-es
* source languages: nso
* target languages: es
* OPUS readme: [nso-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.es | 29.5 | 0.485 |
| [
-0.017321281135082245,
-0.03134017437696457,
0.00038755853893235326,
0.030756665393710136,
0.02888193540275097,
0.021100478246808052,
-0.0019410906825214624,
0.0010664100991562009,
-0.04704657569527626,
0.05295349657535553,
0.005833761300891638,
-0.011476745828986168,
0.006353678647428751,
... |
Declan/WallStreetJournal_model_v6 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2020-04-29T13:29:03Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nso-fi
* source languages: nso
* target languages: fi
* OPUS readme: [nso-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.fi | 27.8 | 0.523 |
| [
-0.019492562860250473,
-0.03798802196979523,
0.0033833826892077923,
0.028897592797875404,
0.024960584938526154,
0.02192752994596958,
0.003770067123696208,
-0.001863514888100326,
-0.05351772904396057,
0.05016452074050903,
0.015173406340181828,
-0.008815486915409565,
0.008900830522179604,
0.... |
Declan/WallStreetJournal_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2020-04-29T13:29:30Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nso-fr
* source languages: nso
* target languages: fr
* OPUS readme: [nso-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.fr | 30.7 | 0.488 |
| [
-0.015580405481159687,
-0.03371619060635567,
-0.01004526112228632,
0.03309284895658493,
0.026833780109882355,
0.01920827105641365,
-0.005476207006722689,
-0.004258340690284967,
-0.04835217818617821,
0.053671568632125854,
0.00969518069177866,
-0.008812270127236843,
-0.00048133142990991473,
... |
Declan/test_model | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2020-04-29T13:29:46Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-nso-sv
* source languages: nso
* target languages: sv
* OPUS readme: [nso-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/nso-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/nso-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.nso.sv | 34.3 | 0.527 |
| [
-0.018607040867209435,
-0.0320408008992672,
0.0006847323384135962,
0.030258480459451675,
0.02911803312599659,
0.016045663505792618,
0.0013913714792579412,
0.0008882306283339858,
-0.05132368579506874,
0.05434031784534454,
0.011348170228302479,
-0.010492131114006042,
0.013417387381196022,
0.... |
Declan/test_push | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ny-de
* source languages: ny
* target languages: de
* OPUS readme: [ny-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ny-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/ny-de/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-de/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ny-de/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ny.de | 23.9 | 0.440 |
| [
-0.014410963281989098,
-0.024773305281996727,
-0.005875651724636555,
0.02875613607466221,
0.023922856897115707,
0.030070947483181953,
0.002536807442083955,
-0.007974709384143353,
-0.05336679890751839,
0.05775323137640953,
0.010895995423197746,
-0.0128201674669981,
0.007330256048589945,
0.0... |
DeepChem/ChemBERTa-5M-MLM | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-om-en
* source languages: om
* target languages: en
* OPUS readme: [om-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/om-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/om-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.om.en | 27.3 | 0.448 |
| [
-0.01934279128909111,
-0.025377899408340454,
-0.002584679052233696,
0.030236879363656044,
0.02697785757482052,
0.024054376408457756,
0.005378573201596737,
-0.0018659516936168075,
-0.04637248441576958,
0.05093495547771454,
0.01572311855852604,
-0.011801417917013168,
0.011899511329829693,
0.... |
DeepESP/gpt2-spanish-medium | [
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"es",
"dataset:ebooks",
"transformers",
"GPT-2",
"Spanish",
"ebooks",
"nlg",
"license:mit"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 340 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pag-fi
* source languages: pag
* target languages: fi
* OPUS readme: [pag-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pag-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pag-fi/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pag.fi | 26.7 | 0.496 |
| [
-0.014024884440004826,
-0.03714059665799141,
0.0026832805015146732,
0.03056926466524601,
0.01879420131444931,
0.031069977208971977,
0.010323028080165386,
-0.00033207854721695185,
-0.049458231776952744,
0.05562502145767212,
0.019670572131872177,
-0.006326952949166298,
0.013539229519665241,
... |
DeepPavlov/bert-base-multilingual-cased-sentence | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"multilingual",
"arxiv:1704.05426",
"arxiv:1809.05053",
"arxiv:1908.10084",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 140 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pap-es
* source languages: pap
* target languages: es
* OPUS readme: [pap-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pap-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pap-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.pap.es | 32.3 | 0.518 |
| [
-0.008937752805650234,
-0.0351497121155262,
0.0050172461196780205,
0.03720295801758766,
0.02972976677119732,
0.02478216215968132,
0.0015884300228208303,
0.006500047631561756,
-0.049265846610069275,
0.056571103632450104,
0.009391220286488533,
-0.020431099459528923,
0.00972607359290123,
0.05... |
DeepPavlov/xlm-roberta-large-en-ru | [
"pytorch",
"xlm-roberta",
"feature-extraction",
"en",
"ru",
"transformers"
] | feature-extraction | {
"architectures": [
"XLMRobertaModel"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 190 | 2020-04-29T13:35:51Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-pl-en
* source languages: pl
* target languages: en
* OPUS readme: [pl-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-en/opus-2019-12-18.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.en | 54.9 | 0.701 |
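A hedged usage sketch for this card's model through the Hugging Face `transformers` MarianMT API (the hub id `Helsinki-NLP/opus-mt-pl-en` follows the standard naming for these ports; the model is downloaded on first use):

```python
MODEL_NAME = "Helsinki-NLP/opus-mt-pl-en"

def translate(texts, model_name=MODEL_NAME):
    """Translate a list of Polish sentences to English with the ported model."""
    # Imported lazily so the sketch can be inspected without transformers installed.
    from transformers import MarianMTModel, MarianTokenizer

    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

if __name__ == "__main__":
    print(translate(["Dzień dobry!"]))  # downloads the model on first run
```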
| [
-0.014878750778734684,
-0.03131267800927162,
-0.0027609984390437603,
0.038057394325733185,
0.035819362848997116,
0.022729696705937386,
0.000891355681233108,
-0.002801948692649603,
-0.05732784792780876,
0.054170865565538406,
0.012372812256217003,
-0.015614180825650692,
-0.0022084603551775217,... |
Dilmk2/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ru-fi
* source languages: ru
* target languages: fi
* OPUS readme: [ru-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ru-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-04-12.zip](https://object.pouta.csc.fi/OPUS-MT-models/ru-fi/opus-2020-04-12.zip)
* test set translations: [opus-2020-04-12.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fi/opus-2020-04-12.test.txt)
* test set scores: [opus-2020-04-12.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ru-fi/opus-2020-04-12.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ru.fi | 40.1 | 0.646 |
| [
-0.013932598754763603,
-0.03682563453912735,
0.007220860105007887,
0.03382923826575279,
0.035664770752191544,
0.026854924857616425,
-0.002130958717316389,
-0.007329304702579975,
-0.06320255994796753,
0.05062665045261383,
0.020215345546603203,
-0.014290020801126957,
0.0005655547138303518,
0... |
Dimedrolza/DialoGPT-small-cyberpunk | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
language:
- ru
- hy
tags:
- translation
license: apache-2.0
---
### rus-hye
* source group: Russian
* target group: Armenian
* OPUS readme: [rus-hye](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-hye/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): hye hye_Latn
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence-initial language token is required in the form `>>id<<` (id = a valid target language ID)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.eval.txt)
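Because this model has multiple target variants (`hye`, `hye_Latn`), each source sentence must start with the `>>id<<` token. A minimal sketch of that preprocessing step — the `add_language_token` helper and `VALID_TARGETS` set are illustrative, not part of the OPUS-MT tooling:

```python
# Minimal sketch: prepend the >>id<< target-language token that multi-target
# OPUS-MT models expect at the start of each source sentence.
# VALID_TARGETS mirrors the target language(s) listed for this model.

VALID_TARGETS = {"hye", "hye_Latn"}

def add_language_token(sentences, target_id):
    """Prefix each source sentence with the >>id<< token for target_id."""
    if target_id not in VALID_TARGETS:
        raise ValueError(f"unknown target language ID: {target_id}")
    return [f">>{target_id}<< {s}" for s in sentences]

batch = add_language_token(["Привет, мир!"], "hye")
# batch[0] now begins with ">>hye<<", ready for SentencePiece tokenization
```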
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.hye | 21.7 | 0.494 |
### System Info:
- hf_name: rus-hye
- source_languages: rus
- target_languages: hye
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-hye/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'hy']
- src_constituents: {'rus'}
- tgt_constituents: {'hye', 'hye_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-hye/opus-2020-06-16.test.txt
- src_alpha3: rus
- tgt_alpha3: hye
- short_pair: ru-hy
- chrF2_score: 0.494
- bleu: 21.7
- brevity_penalty: 1.0
- ref_len: 1602.0
- src_name: Russian
- tgt_name: Armenian
- train_date: 2020-06-16
- src_alpha2: ru
- tgt_alpha2: hy
- prefer_old: False
- long_pair: rus-hye
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.009973330423235893,
-0.02847977541387081,
-0.011128401383757591,
0.0441533699631691,
0.06105821579694748,
0.024935860186815262,
-0.006574124097824097,
-0.008188750594854355,
-0.08038082718849182,
0.055859968066215515,
0.016238147392868996,
-0.027629230171442032,
-0.0060103097930550575,
... |
DingleyMaillotUrgell/homer-bot | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language:
- ru
- lt
tags:
- translation
license: apache-2.0
---
### rus-lit
* source group: Russian
* target group: Lithuanian
* OPUS readme: [rus-lit](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lit/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): lit
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.lit | 43.5 | 0.675 |
### System Info:
- hf_name: rus-lit
- source_languages: rus
- target_languages: lit
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-lit/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'lt']
- src_constituents: {'rus'}
- tgt_constituents: {'lit'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-lit/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: lit
- short_pair: ru-lt
- chrF2_score: 0.675
- bleu: 43.5
- brevity_penalty: 0.937
- ref_len: 14406.0
- src_name: Russian
- tgt_name: Lithuanian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: lt
- prefer_old: False
- long_pair: rus-lit
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
0.002479123417288065,
-0.03401516005396843,
-0.013239961117506027,
0.03705727308988571,
0.05401303246617317,
0.021004803478717804,
-0.011123732663691044,
-0.008456745184957981,
-0.09294487535953522,
0.05809081345796585,
0.024929439648985863,
-0.01667696237564087,
-0.00963760819286108,
0.05... |
Dizoid/Lll | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- ru
- no
tags:
- translation
license: apache-2.0
---
### rus-nor
* source group: Russian
* target group: Norwegian
* OPUS readme: [rus-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-nor/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): nno nob
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* a sentence-initial language token is required in the form `>>id<<` (id = a valid target language ID)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.nor | 20.3 | 0.418 |
### System Info:
- hf_name: rus-nor
- source_languages: rus
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'no']
- src_constituents: {'rus'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-nor/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: nor
- short_pair: ru-no
- chrF2_score: 0.418
- bleu: 20.3
- brevity_penalty: 0.946
- ref_len: 11686.0
- src_name: Russian
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: no
- prefer_old: False
- long_pair: rus-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.005420933477580547,
-0.03108110837638378,
-0.0100853918120265,
0.04124270752072334,
0.058753278106451035,
0.01936347596347332,
-0.011229711584746838,
-0.007550706621259451,
-0.08301769196987152,
0.05976785346865654,
0.01823735050857067,
-0.026765206828713417,
-0.014275572262704372,
0.05... |
Dkwkk/Da | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- ru
- sl
tags:
- translation
license: apache-2.0
---
### rus-slv
* source group: Russian
* target group: Slovenian
* OPUS readme: [rus-slv](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-slv/README.md)
* model: transformer-align
* source language(s): rus
* target language(s): slv
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.rus.slv | 32.3 | 0.492 |
### System Info:
- hf_name: rus-slv
- source_languages: rus
- target_languages: slv
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-slv/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ru', 'sl']
- src_constituents: {'rus'}
- tgt_constituents: {'slv'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-slv/opus-2020-06-17.test.txt
- src_alpha3: rus
- tgt_alpha3: slv
- short_pair: ru-sl
- chrF2_score: 0.492
- bleu: 32.3
- brevity_penalty: 0.992
- ref_len: 2135.0
- src_name: Russian
- tgt_name: Slovenian
- train_date: 2020-06-17
- src_alpha2: ru
- tgt_alpha2: sl
- prefer_old: False
- long_pair: rus-slv
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.0031977370381355286,
-0.03205454722046852,
-0.01500378455966711,
0.03724714368581772,
0.058694060891866684,
0.021672455593943596,
-0.00623663142323494,
0.008549833670258522,
-0.08726894855499268,
0.06397119164466858,
0.01079593412578106,
-0.02540377900004387,
-0.003716690232977271,
0.05... |
Waynehillsdev/Waynehills-STT-doogie-server | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 61 | null | ---
language:
- mt
- ar
- he
- ti
- am
- sem
tags:
- translation
license: apache-2.0
---
### sem-sem
* source group: Semitic languages
* target group: Semitic languages
* OPUS readme: [sem-sem](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sem-sem/README.md)
* model: transformer
* source language(s): apc ara arq arz heb mlt
* target language(s): apc ara arq arz heb mlt
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form `>>id<<` (id = a valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara-ara.ara.ara | 4.2 | 0.200 |
| Tatoeba-test.ara-heb.ara.heb | 34.0 | 0.542 |
| Tatoeba-test.ara-mlt.ara.mlt | 16.6 | 0.513 |
| Tatoeba-test.heb-ara.heb.ara | 18.8 | 0.477 |
| Tatoeba-test.mlt-ara.mlt.ara | 20.7 | 0.388 |
| Tatoeba-test.multi.multi | 27.1 | 0.507 |
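The System Info block below reports a `brevity_penalty` alongside the aggregate BLEU. A minimal sketch of how BLEU's brevity penalty is computed from hypothesis and reference lengths (the standard formula: 1 when the hypothesis is at least as long as the reference, exp(1 − r/c) otherwise):

```python
import math

def brevity_penalty(hyp_len, ref_len):
    """BLEU brevity penalty: penalizes hypotheses shorter than the reference.

    Returns 1.0 when hyp_len >= ref_len, exp(1 - ref_len/hyp_len) otherwise.
    """
    if hyp_len == 0:
        return 0.0
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

A penalty of 0.972 (as reported here) thus indicates translations slightly shorter overall than the 13472-token reference.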
### System Info:
- hf_name: sem-sem
- source_languages: sem
- target_languages: sem
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sem-sem/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['mt', 'ar', 'he', 'ti', 'am', 'sem']
- src_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
- tgt_constituents: {'apc', 'mlt', 'arz', 'ara', 'heb', 'tir', 'arq', 'afb', 'amh', 'acm', 'ary'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sem-sem/opus-2020-07-27.test.txt
- src_alpha3: sem
- tgt_alpha3: sem
- short_pair: sem-sem
- chrF2_score: 0.507
- bleu: 27.1
- brevity_penalty: 0.972
- ref_len: 13472.0
- src_name: Semitic languages
- tgt_name: Semitic languages
- train_date: 2020-07-27
- src_alpha2: sem
- tgt_alpha2: sem
- prefer_old: False
- long_pair: sem-sem
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.007999188266694546,
-0.031555574387311935,
-0.00972174946218729,
0.05574056878685951,
0.05029028654098511,
0.029623758047819138,
0.00239549670368433,
-0.00788784958422184,
-0.06283779442310333,
0.05803918465971947,
-0.006895374972373247,
-0.024152284488081932,
-0.004546016920357943,
0.0... |
Waynehillsdev/Waynehills_summary_tensorflow | [
"tf",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sg-en
* source languages: sg
* target languages: en
* OPUS readme: [sg-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sg-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sg-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sg.en | 32.0 | 0.477 |
| [
-0.011600869707763195,
-0.031874898821115494,
-0.00771194975823164,
0.03799346834421158,
0.03127836063504219,
0.02427729405462742,
-0.00025131675647571683,
0.0013559506041929126,
-0.05050203576683998,
0.059904105961322784,
0.015850752592086792,
-0.010006261058151722,
0.008900017477571964,
... |
Doohae/p_encoder | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sg-fr
* source languages: sg
* target languages: fr
* OPUS readme: [sg-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sg-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sg-fr/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-fr/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-fr/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sg.fr | 24.9 | 0.420 |
| [
-0.011915980838239193,
-0.030335774645209312,
-0.01175461895763874,
0.03746332973241806,
0.030597547069191933,
0.02381996624171734,
-0.0066636730916798115,
-0.004188832361251116,
-0.04823022708296776,
0.054302193224430084,
0.013144970871508121,
-0.00997588224709034,
-0.000036820900277234614,... |
Doohae/q_encoder | [
"pytorch"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sg-sv
* source languages: sg
* target languages: sv
* OPUS readme: [sg-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sg-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sg-sv/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-sv/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sg-sv/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sg.sv | 25.3 | 0.428 |
| [
-0.011190415360033512,
-0.02893824316561222,
-0.0034819338470697403,
0.03926616534590721,
0.033071424812078476,
0.02339177578687668,
-0.00019190202874597162,
0.0035280254669487476,
-0.050006963312625885,
0.05817670375108719,
0.015418151393532753,
-0.010342372581362724,
0.014272774569690228,
... |
Doohae/roberta | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
] | question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- sh
- eo
tags:
- translation
license: apache-2.0
---
### hbs-epo
* source group: Serbo-Croatian
* target group: Esperanto
* OPUS readme: [hbs-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hbs-epo/README.md)
* model: transformer-align
* source language(s): bos_Latn hrv srp_Cyrl srp_Latn
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.hbs.epo | 18.7 | 0.383 |
### System Info:
- hf_name: hbs-epo
- source_languages: hbs
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hbs-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sh', 'eo']
- src_constituents: {'hrv', 'srp_Cyrl', 'bos_Latn', 'srp_Latn'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.test.txt
- src_alpha3: hbs
- tgt_alpha3: epo
- short_pair: sh-eo
- chrF2_score: 0.383
- bleu: 18.7
- brevity_penalty: 0.999
- ref_len: 18457.0
- src_name: Serbo-Croatian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: sh
- tgt_alpha2: eo
- prefer_old: False
- long_pair: hbs-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.003171768272295594,
-0.02939082868397236,
-0.011697310023009777,
0.03275523707270622,
0.05722969397902489,
0.02068343572318554,
-0.008889194577932358,
0.009355984628200531,
-0.07429265230894089,
0.055848926305770874,
-0.00013273513468448073,
-0.02748764306306839,
-0.010673108510673046,
... |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-75 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | null | ---
language:
- be
- hr
- mk
- cs
- ru
- pl
- bg
- uk
- sl
- sla
- en
tags:
- translation
license: apache-2.0
---
### sla-eng
* source group: Slavic languages
* target group: English
* OPUS readme: [sla-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-eng/README.md)
* model: transformer
* source language(s): bel bel_Latn bos_Latn bul bul_Latn ces csb_Latn dsb hrv hsb mkd orv_Cyrl pol rue rus slv srp_Cyrl srp_Latn ukr
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newssyscomb2009-ceseng.ces.eng | 26.7 | 0.542 |
| newstest2009-ceseng.ces.eng | 25.2 | 0.534 |
| newstest2010-ceseng.ces.eng | 25.9 | 0.545 |
| newstest2011-ceseng.ces.eng | 26.8 | 0.544 |
| newstest2012-ceseng.ces.eng | 25.6 | 0.536 |
| newstest2012-ruseng.rus.eng | 32.5 | 0.588 |
| newstest2013-ceseng.ces.eng | 28.8 | 0.556 |
| newstest2013-ruseng.rus.eng | 26.4 | 0.532 |
| newstest2014-csen-ceseng.ces.eng | 31.4 | 0.591 |
| newstest2014-ruen-ruseng.rus.eng | 29.6 | 0.576 |
| newstest2015-encs-ceseng.ces.eng | 28.2 | 0.545 |
| newstest2015-enru-ruseng.rus.eng | 28.1 | 0.551 |
| newstest2016-encs-ceseng.ces.eng | 30.0 | 0.567 |
| newstest2016-enru-ruseng.rus.eng | 27.4 | 0.548 |
| newstest2017-encs-ceseng.ces.eng | 26.5 | 0.537 |
| newstest2017-enru-ruseng.rus.eng | 31.0 | 0.574 |
| newstest2018-encs-ceseng.ces.eng | 27.9 | 0.548 |
| newstest2018-enru-ruseng.rus.eng | 26.8 | 0.545 |
| newstest2019-ruen-ruseng.rus.eng | 29.1 | 0.562 |
| Tatoeba-test.bel-eng.bel.eng | 42.5 | 0.609 |
| Tatoeba-test.bul-eng.bul.eng | 55.4 | 0.697 |
| Tatoeba-test.ces-eng.ces.eng | 53.1 | 0.688 |
| Tatoeba-test.csb-eng.csb.eng | 23.1 | 0.446 |
| Tatoeba-test.dsb-eng.dsb.eng | 31.1 | 0.467 |
| Tatoeba-test.hbs-eng.hbs.eng | 56.1 | 0.702 |
| Tatoeba-test.hsb-eng.hsb.eng | 46.2 | 0.597 |
| Tatoeba-test.mkd-eng.mkd.eng | 54.5 | 0.680 |
| Tatoeba-test.multi.eng | 53.2 | 0.683 |
| Tatoeba-test.orv-eng.orv.eng | 12.1 | 0.292 |
| Tatoeba-test.pol-eng.pol.eng | 51.1 | 0.671 |
| Tatoeba-test.rue-eng.rue.eng | 19.6 | 0.389 |
| Tatoeba-test.rus-eng.rus.eng | 54.1 | 0.686 |
| Tatoeba-test.slv-eng.slv.eng | 43.4 | 0.610 |
| Tatoeba-test.ukr-eng.ukr.eng | 53.8 | 0.685 |
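Every benchmark row above pairs BLEU with chr-F, a character n-gram F-score. A simplified, hedged sketch of the idea — not the exact sacrebleu/chrF reference implementation (whitespace handling and averaging order differ there), just the core precision/recall-over-character-n-grams computation:

```python
from collections import Counter

def _char_ngrams(text, n):
    # Strip spaces so word boundaries don't dominate the n-gram counts.
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified chr-F sketch: average character n-gram precision and
    recall over n = 1..max_n, combined with F-beta (beta=2 favors recall)."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = _char_ngrams(hypothesis, n), _char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # strings shorter than n contribute nothing
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

Identical strings score 1.0 and fully disjoint strings score 0.0; real evaluations use the published chrF tooling rather than a sketch like this.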
### System Info:
- hf_name: sla-eng
- source_languages: sla
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla', 'en']
- src_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.test.txt
- src_alpha3: sla
- tgt_alpha3: eng
- short_pair: sla-en
- chrF2_score: 0.683
- bleu: 53.2
- brevity_penalty: 0.974
- ref_len: 70897.0
- src_name: Slavic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: sla
- tgt_alpha2: en
- prefer_old: False
- long_pair: sla-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.005142298527061939,
-0.027162425220012665,
-0.00841364823281765,
0.04326607286930084,
0.05799520015716553,
0.02400721050798893,
-0.005734595004469156,
0.0028300730045884848,
-0.08110072463750839,
0.05163151025772095,
0.011832048185169697,
-0.02765386924147606,
-0.011044648475944996,
0.0... |
DoyyingFace/bert-asian-hate-tweets-asian-unclean-with-clean-valid | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
language:
- be
- hr
- mk
- cs
- ru
- pl
- bg
- uk
- sl
- sla
tags:
- translation
license: apache-2.0
---
### sla-sla
* source group: Slavic languages
* target group: Slavic languages
* OPUS readme: [sla-sla](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-sla/README.md)
* model: transformer
* source language(s): bel bel_Latn bos_Latn bul bul_Latn ces dsb hrv hsb mkd orv_Cyrl pol rus slv srp_Cyrl srp_Latn ukr
* target language(s): bel bel_Latn bos_Latn bul bul_Latn ces dsb hrv hsb mkd orv_Cyrl pol rus slv srp_Cyrl srp_Latn ukr
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence-initial language token is required in the form `>>id<<` (id = a valid target language ID)
* download original weights: [opus-2020-07-27.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.zip)
* test set translations: [opus-2020-07-27.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.test.txt)
* test set scores: [opus-2020-07-27.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012-cesrus.ces.rus | 15.9 | 0.437 |
| newstest2012-rusces.rus.ces | 13.6 | 0.403 |
| newstest2013-cesrus.ces.rus | 19.8 | 0.473 |
| newstest2013-rusces.rus.ces | 17.9 | 0.449 |
| Tatoeba-test.bel-bul.bel.bul | 100.0 | 1.000 |
| Tatoeba-test.bel-ces.bel.ces | 33.5 | 0.630 |
| Tatoeba-test.bel-hbs.bel.hbs | 45.4 | 0.644 |
| Tatoeba-test.bel-mkd.bel.mkd | 19.3 | 0.531 |
| Tatoeba-test.bel-pol.bel.pol | 46.9 | 0.681 |
| Tatoeba-test.bel-rus.bel.rus | 58.5 | 0.767 |
| Tatoeba-test.bel-ukr.bel.ukr | 55.1 | 0.743 |
| Tatoeba-test.bul-bel.bul.bel | 10.7 | 0.423 |
| Tatoeba-test.bul-ces.bul.ces | 36.9 | 0.585 |
| Tatoeba-test.bul-hbs.bul.hbs | 53.7 | 0.807 |
| Tatoeba-test.bul-mkd.bul.mkd | 31.9 | 0.715 |
| Tatoeba-test.bul-pol.bul.pol | 38.6 | 0.607 |
| Tatoeba-test.bul-rus.bul.rus | 44.8 | 0.655 |
| Tatoeba-test.bul-ukr.bul.ukr | 49.9 | 0.691 |
| Tatoeba-test.ces-bel.ces.bel | 30.9 | 0.585 |
| Tatoeba-test.ces-bul.ces.bul | 75.8 | 0.859 |
| Tatoeba-test.ces-hbs.ces.hbs | 50.0 | 0.661 |
| Tatoeba-test.ces-hsb.ces.hsb | 7.9 | 0.246 |
| Tatoeba-test.ces-mkd.ces.mkd | 24.6 | 0.569 |
| Tatoeba-test.ces-pol.ces.pol | 44.3 | 0.652 |
| Tatoeba-test.ces-rus.ces.rus | 50.8 | 0.690 |
| Tatoeba-test.ces-slv.ces.slv | 4.9 | 0.240 |
| Tatoeba-test.ces-ukr.ces.ukr | 52.9 | 0.687 |
| Tatoeba-test.dsb-pol.dsb.pol | 16.3 | 0.367 |
| Tatoeba-test.dsb-rus.dsb.rus | 12.7 | 0.245 |
| Tatoeba-test.hbs-bel.hbs.bel | 32.9 | 0.531 |
| Tatoeba-test.hbs-bul.hbs.bul | 100.0 | 1.000 |
| Tatoeba-test.hbs-ces.hbs.ces | 40.3 | 0.626 |
| Tatoeba-test.hbs-mkd.hbs.mkd | 19.3 | 0.535 |
| Tatoeba-test.hbs-pol.hbs.pol | 45.0 | 0.650 |
| Tatoeba-test.hbs-rus.hbs.rus | 53.5 | 0.709 |
| Tatoeba-test.hbs-ukr.hbs.ukr | 50.7 | 0.684 |
| Tatoeba-test.hsb-ces.hsb.ces | 17.9 | 0.366 |
| Tatoeba-test.mkd-bel.mkd.bel | 23.6 | 0.548 |
| Tatoeba-test.mkd-bul.mkd.bul | 54.2 | 0.833 |
| Tatoeba-test.mkd-ces.mkd.ces | 12.1 | 0.371 |
| Tatoeba-test.mkd-hbs.mkd.hbs | 19.3 | 0.577 |
| Tatoeba-test.mkd-pol.mkd.pol | 53.7 | 0.833 |
| Tatoeba-test.mkd-rus.mkd.rus | 34.2 | 0.745 |
| Tatoeba-test.mkd-ukr.mkd.ukr | 42.7 | 0.708 |
| Tatoeba-test.multi.multi | 48.5 | 0.672 |
| Tatoeba-test.orv-pol.orv.pol | 10.1 | 0.355 |
| Tatoeba-test.orv-rus.orv.rus | 10.6 | 0.275 |
| Tatoeba-test.orv-ukr.orv.ukr | 7.5 | 0.230 |
| Tatoeba-test.pol-bel.pol.bel | 29.8 | 0.533 |
| Tatoeba-test.pol-bul.pol.bul | 36.8 | 0.578 |
| Tatoeba-test.pol-ces.pol.ces | 43.6 | 0.626 |
| Tatoeba-test.pol-dsb.pol.dsb | 0.9 | 0.097 |
| Tatoeba-test.pol-hbs.pol.hbs | 42.4 | 0.644 |
| Tatoeba-test.pol-mkd.pol.mkd | 19.3 | 0.535 |
| Tatoeba-test.pol-orv.pol.orv | 0.7 | 0.109 |
| Tatoeba-test.pol-rus.pol.rus | 49.6 | 0.680 |
| Tatoeba-test.pol-slv.pol.slv | 7.3 | 0.262 |
| Tatoeba-test.pol-ukr.pol.ukr | 46.8 | 0.664 |
| Tatoeba-test.rus-bel.rus.bel | 34.4 | 0.577 |
| Tatoeba-test.rus-bul.rus.bul | 45.5 | 0.657 |
| Tatoeba-test.rus-ces.rus.ces | 48.0 | 0.659 |
| Tatoeba-test.rus-dsb.rus.dsb | 10.7 | 0.029 |
| Tatoeba-test.rus-hbs.rus.hbs | 44.6 | 0.655 |
| Tatoeba-test.rus-mkd.rus.mkd | 34.9 | 0.617 |
| Tatoeba-test.rus-orv.rus.orv | 0.1 | 0.073 |
| Tatoeba-test.rus-pol.rus.pol | 45.2 | 0.659 |
| Tatoeba-test.rus-slv.rus.slv | 30.4 | 0.476 |
| Tatoeba-test.rus-ukr.rus.ukr | 57.6 | 0.751 |
| Tatoeba-test.slv-ces.slv.ces | 42.5 | 0.604 |
| Tatoeba-test.slv-pol.slv.pol | 39.6 | 0.601 |
| Tatoeba-test.slv-rus.slv.rus | 47.2 | 0.638 |
| Tatoeba-test.slv-ukr.slv.ukr | 36.4 | 0.549 |
| Tatoeba-test.ukr-bel.ukr.bel | 36.9 | 0.597 |
| Tatoeba-test.ukr-bul.ukr.bul | 56.4 | 0.733 |
| Tatoeba-test.ukr-ces.ukr.ces | 52.1 | 0.686 |
| Tatoeba-test.ukr-hbs.ukr.hbs | 47.1 | 0.670 |
| Tatoeba-test.ukr-mkd.ukr.mkd | 20.8 | 0.548 |
| Tatoeba-test.ukr-orv.ukr.orv | 0.2 | 0.058 |
| Tatoeba-test.ukr-pol.ukr.pol | 50.1 | 0.695 |
| Tatoeba-test.ukr-rus.ukr.rus | 63.9 | 0.790 |
| Tatoeba-test.ukr-slv.ukr.slv | 14.5 | 0.288 |
### System Info:
- hf_name: sla-sla
- source_languages: sla
- target_languages: sla
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-sla/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla']
- src_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
- tgt_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-sla/opus-2020-07-27.test.txt
- src_alpha3: sla
- tgt_alpha3: sla
- short_pair: sla-sla
- chrF2_score: 0.672
- bleu: 48.5
- brevity_penalty: 1.0
- ref_len: 59320.0
- src_name: Slavic languages
- tgt_name: Slavic languages
- train_date: 2020-07-27
- src_alpha2: sla
- tgt_alpha2: sla
- prefer_old: False
- long_pair: sla-sla
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.005482249893248081,
-0.030721422284841537,
-0.0030798017978668213,
0.04654575511813164,
0.06056606397032738,
0.031110286712646484,
-0.008681686595082283,
0.006348902825266123,
-0.07810530811548233,
0.05628453567624092,
0.017983242869377136,
-0.02506597340106964,
-0.006666518282145262,
0... |
albert-base-v1 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 38,156 | 2020-04-29T13:47:47Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sn-es
* source languages: sn
* target languages: es
* OPUS readme: [sn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sn-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sn-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sn.es | 32.5 | 0.509 |
| [
-0.019437288865447044,
-0.029604772105813026,
-0.0033538707066327333,
0.038362350314855576,
0.02702578343451023,
0.027485273778438568,
-0.006354611366987228,
0.0007086475379765034,
-0.04980509355664253,
0.051026877015829086,
0.002894024830311537,
-0.006697879638522863,
0.0025570436846464872,... |
albert-base-v2 | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4,785,283 | 2020-04-29T13:48:05Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sn-fr
* source languages: sn
* target languages: fr
* OPUS readme: [sn-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sn-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sn-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sn.fr | 30.8 | 0.491 |
| [
-0.016745252534747124,
-0.030812231823801994,
-0.011192369274795055,
0.037703875452280045,
0.025192247703671455,
0.024695731699466705,
-0.008829130791127682,
-0.004786650650203228,
-0.050645049661397934,
0.05189144238829613,
0.010130190290510654,
-0.006833007093518972,
-0.002661163453012705,... |
albert-large-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 687 | 2020-04-29T13:48:22Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sn-sv
* source languages: sn
* target languages: sv
* OPUS readme: [sn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sn-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sn-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sn.sv | 35.6 | 0.536 |
| [
-0.01965818740427494,
-0.02858627215027809,
-0.002351032104343176,
0.040695589035749435,
0.027561519294977188,
0.023395096883177757,
-0.0020725983195006847,
0.00035668289638124406,
-0.05180903896689415,
0.05381147190928459,
0.01028973888605833,
-0.008175088092684746,
0.012552506290376186,
... |
albert-large-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26,792 | 2020-04-29T13:48:40Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sq-en
* source languages: sq
* target languages: en
* OPUS readme: [sq-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sq-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sq-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.sq.en | 58.4 | 0.732 |
| [
-0.0123189901933074,
-0.032945338636636734,
0.0034963444340974092,
0.043169036507606506,
0.03391813859343529,
0.016420554369688034,
-0.0030064054299145937,
0.0017531472258269787,
-0.05040416121482849,
0.04382745921611786,
-0.004960019141435623,
-0.014375568367540836,
0.009051389992237091,
... |
albert-xlarge-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 341 | 2020-04-29T13:48:58Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sq-es
* source languages: sq
* target languages: es
* OPUS readme: [sq-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sq-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sq-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| GlobalVoices.sq.es | 23.9 | 0.510 |
| [
-0.011864805594086647,
-0.029092662036418915,
0.00047838463797234,
0.04478392004966736,
0.03472163528203964,
0.015415273606777191,
-0.00310715870000422,
0.0035180733539164066,
-0.04775425046682358,
0.047703780233860016,
-0.007978670299053192,
-0.015946077182888985,
0.014843763783574104,
0.... |
albert-xlarge-v2 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,973 | 2020-04-29T13:49:13Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sq-sv
* source languages: sq
* target languages: sv
* OPUS readme: [sq-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sq-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sq-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sq-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sq.sv | 36.2 | 0.559 |
| [
-0.0136776277795434,
-0.029130108654499054,
0.004272020887583494,
0.04329564422369003,
0.03262180835008621,
0.014956978149712086,
-0.00047442628419958055,
0.0007559327059425414,
-0.049747299402952194,
0.04665979743003845,
-0.0010891035199165344,
-0.013475832529366016,
0.01512855477631092,
... |
albert-xxlarge-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7,091 | 2020-04-29T13:49:26Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-srn-en
* source languages: srn
* target languages: en
* OPUS readme: [srn-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/srn-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/srn-en/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-en/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-en/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.srn.en | 40.3 | 0.555 |
| [
-0.01678062044084072,
-0.02817518636584282,
-0.00831504538655281,
0.039029307663440704,
0.02640351839363575,
0.024991780519485474,
-0.0013055222807452083,
-0.003804973093792796,
-0.046280574053525925,
0.05312807112932205,
0.01500481367111206,
-0.01460307464003563,
0.006808919366449118,
0.0... |
albert-xxlarge-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42,640 | 2020-04-29T13:49:41Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-srn-es
* source languages: srn
* target languages: es
* OPUS readme: [srn-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/srn-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/srn-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.srn.es | 30.4 | 0.481 |
| [
-0.01607321761548519,
-0.030935361981391907,
-0.005938918795436621,
0.039795223623514175,
0.027675317600369453,
0.024414317682385445,
-0.0038594743236899376,
-0.0030547224450856447,
-0.04406289383769035,
0.05169985070824623,
0.009667256847023964,
-0.013435475528240204,
0.004993862006813288,
... |
bert-base-cased-finetuned-mrpc | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11,644 | 2020-04-29T13:49:55Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-srn-fr
* source languages: srn
* target languages: fr
* OPUS readme: [srn-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/srn-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/srn-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.srn.fr | 28.9 | 0.462 |
| [
-0.0156712606549263,
-0.03096758760511875,
-0.012708934023976326,
0.037348825484514236,
0.02529464103281498,
0.02330772951245308,
-0.006021237466484308,
-0.00763948867097497,
-0.04595862701535225,
0.051659539341926575,
0.011916136369109154,
-0.010820581577718258,
-0.0007165659917518497,
0.... |
bert-base-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,621,271 | 2020-04-29T13:50:10Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-srn-sv
* source languages: srn
* target languages: sv
* OPUS readme: [srn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/srn-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/srn-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/srn-sv/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.srn.sv | 32.2 | 0.500 |
| [
-0.016952235251665115,
-0.02961435541510582,
-0.005281987600028515,
0.041033487766981125,
0.028488440439105034,
0.02094542607665062,
-0.0016228151507675648,
-0.0040933964774012566,
-0.04729820042848587,
0.055497728288173676,
0.014152743853628635,
-0.014031345956027508,
0.009527132846415043,
... |
bert-base-chinese | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,377,486 | 2020-04-29T13:50:24Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ss-en
* source languages: ss
* target languages: en
* OPUS readme: [ss-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ss-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ss-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ss-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ss-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ss.en | 30.9 | 0.478 |
| [
-0.020026404410600662,
-0.02733660489320755,
-0.010955766774713993,
0.0412980280816555,
0.02768012322485447,
0.027362743392586708,
-0.0028349673375487328,
0.0012133106356486678,
-0.05290226265788078,
0.0566101111471653,
0.012437286786735058,
-0.012065742164850235,
0.006458846852183342,
0.0... |
bert-base-german-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"exbert",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 175,983 | 2020-04-29T13:50:39Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-ssp-es
* source languages: ssp
* target languages: es
* OPUS readme: [ssp-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ssp-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ssp-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ssp-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ssp-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ssp.es | 89.7 | 0.930 |
| [
-0.01423339918255806,
-0.030141785740852356,
-0.006688883062452078,
0.03821958228945732,
0.0301545187830925,
0.022878287360072136,
-0.0005359693896025419,
0.0026032801251858473,
-0.050247929990291595,
0.056084342300891876,
0.007125211879611015,
-0.013940834440290928,
0.005498244892805815,
... |
bert-base-german-dbmdz-cased | [
"pytorch",
"jax",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,814 | 2020-04-29T13:50:57Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-st-en
* source languages: st
* target languages: en
* OPUS readme: [st-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.en | 45.7 | 0.609 |
| [
-0.01418577041476965,
-0.028001317754387856,
-0.009834697470068932,
0.038807354867458344,
0.029773162677884102,
0.026385625824332237,
0.00014269101666286588,
-0.002843619091436267,
-0.051135577261447906,
0.056341007351875305,
0.011117905378341675,
-0.01403027679771185,
0.006604624446481466,
... |
bert-base-german-dbmdz-uncased | [
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68,305 | 2020-04-29T13:51:13Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-st-es
* source languages: st
* target languages: es
* OPUS readme: [st-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.es | 31.3 | 0.499 |
| [
-0.011348684318363667,
-0.028394561260938644,
-0.007157316897064447,
0.03741636872291565,
0.030699780210852623,
0.025641530752182007,
-0.0023479911033064127,
-0.0009783144341781735,
-0.05005502700805664,
0.05543157458305359,
0.006216011010110378,
-0.014665666967630386,
0.0061501748859882355,... |
bert-base-multilingual-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
... | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4,749,504 | 2020-04-29T13:51:34Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-st-fi
* source languages: st
* target languages: fi
* OPUS readme: [st-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-fi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-fi/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-fi/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-fi/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.fi | 28.8 | 0.520 |
| [
-0.015164615586400032,
-0.03371502831578255,
-0.0013795499689877033,
0.03817916288971901,
0.025787511840462685,
0.028309712186455727,
0.0015914889518171549,
-0.004141359589993954,
-0.055821117013692856,
0.05089295282959938,
0.013103986158967018,
-0.01237387303262949,
0.00932731106877327,
0... |
bert-base-multilingual-uncased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
... | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 328,585 | 2020-04-29T13:51:49Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-st-fr
* source languages: st
* target languages: fr
* OPUS readme: [st-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.st.fr | 30.7 | 0.490 |
| [
-0.012104650028049946,
-0.029841912910342216,
-0.013181304559111595,
0.036750685423612595,
0.02848145365715027,
0.02428114414215088,
-0.0052024247124791145,
-0.00672726659104228,
-0.04832877218723297,
0.054018132388591766,
0.010013185441493988,
-0.011667458340525627,
-0.0007053219014778733,
... |
bert-large-cased-whole-word-masking-finetuned-squad | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,214 | 2020-05-12T21:40:41Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-NORWAY
* source languages: sv
* target languages: nb_NO,nb,nn_NO,nn,nog,no_nb,no
* OPUS readme: [sv-nb_NO+nb+nn_NO+nn+nog+no_nb+no](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-nb_NO+nb+nn_NO+nn+nog+no_nb+no/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-nb_NO+nb+nn_NO+nn+nog+no_nb+no/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.no | 39.3 | 0.590 |
| [
-0.012416972778737545,
-0.023729978129267693,
0.001046065939590335,
0.04102308675646782,
0.03290456533432007,
0.011741122230887413,
-0.009010376408696175,
-0.010902064852416515,
-0.050083454698324203,
0.06332484632730484,
0.008010968565940857,
-0.015318368561565876,
0.005319069605320692,
0... |
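The sv-NORWAY card above (like the sv-ZH card further down) notes that a sentence-initial language token of the form `>>id<<` is required for multi-target models. A minimal sketch of what that preprocessing step looks like — `add_target_token` is a hypothetical helper name, not part of OPUS-MT or transformers:

```python
def add_target_token(sentence: str, lang_id: str) -> str:
    """Prepend the OPUS-MT target-language token (e.g. '>>nb<<') to a source
    sentence, as required by multi-target models such as opus-mt-sv-NORWAY."""
    return f">>{lang_id}<< {sentence}"


if __name__ == "__main__":
    # Route a Swedish sentence toward Norwegian Bokmål (nb).
    print(add_target_token("Hej världen", "nb"))  # → ">>nb<< Hej världen"
```

The tokenized string is then fed to the model as usual; single-target models (e.g. opus-mt-sv-af) take the raw sentence with no token.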
bert-large-cased-whole-word-masking | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,316 | 2020-05-12T21:40:32Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-ZH
* source languages: sv
* target languages: cmn,cn,yue,ze_zh,zh_cn,zh_CN,zh_HK,zh_tw,zh_TW,zh_yue,zhs,zht,zh
* OPUS readme: [sv-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-cmn+cn+yue+ze_zh+zh_cn+zh_CN+zh_HK+zh_tw+zh_TW+zh_yue+zhs+zht+zh/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| bible-uedin.sv.zh | 24.2 | 0.342 |
| [
-0.020853424444794655,
-0.03225753456354141,
-0.0009497338905930519,
0.049584999680519104,
0.021560141816735268,
0.017497222870588303,
-0.0008081396226771176,
-0.009995290078222752,
-0.04078751802444458,
0.053128067404031754,
0.00943820271641016,
-0.00923441257327795,
0.016543736681342125,
... |
bert-large-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 388,769 | 2020-04-29T13:52:29Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-af
* source languages: sv
* target languages: af
* OPUS readme: [sv-af](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-af/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-af/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-af/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-af/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.af | 44.4 | 0.623 |
| [
-0.01646343059837818,
-0.032250016927719116,
-0.001705040456727147,
0.040604546666145325,
0.028176208958029747,
0.022241581231355667,
0.0016488838009536266,
-0.0018168243113905191,
-0.05021768808364868,
0.05401906371116638,
0.009408470243215561,
-0.007126869633793831,
0.012568837963044643,
... |
bert-large-uncased-whole-word-masking-finetuned-squad | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 480,510 | 2020-04-29T13:52:41Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-ase
* source languages: sv
* target languages: ase
* OPUS readme: [sv-ase](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ase/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ase/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ase/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ase/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ase | 40.5 | 0.572 |
| [
-0.013531362637877464,
-0.03155975416302681,
0.007620497141033411,
0.031283918768167496,
0.038196664303541183,
0.013799442909657955,
0.0029963082633912563,
0.00021108833607286215,
-0.05424084514379501,
0.053743135184049606,
0.004925841465592384,
-0.004870586097240448,
0.01055233832448721,
... |
bert-large-uncased-whole-word-masking | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 76,685 | 2020-04-29T13:53:03Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-bcl
* source languages: sv
* target languages: bcl
* OPUS readme: [sv-bcl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-bcl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-bcl/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bcl/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bcl/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.bcl | 39.5 | 0.607 |
| [
-0.016759686172008514,
-0.02087964490056038,
0.0005199289880692959,
0.04109744727611542,
0.024054955691099167,
0.02088460512459278,
0.006316446699202061,
0.0031911979895085096,
-0.04339348524808884,
0.049122776836156845,
0.010579443536698818,
-0.016768217086791992,
0.011608085595071316,
0.... |
bert-large-uncased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,058,496 | 2020-04-29T13:53:28Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-bem
* source languages: sv
* target languages: bem
* OPUS readme: [sv-bem](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-bem/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-bem/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bem/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bem/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.bem | 22.3 | 0.473 |
| [
-0.018502937629818916,
-0.032558634877204895,
0.0023784381337463856,
0.04048726707696915,
0.030712643638253212,
0.02450881339609623,
0.005873582791537046,
0.0006199518102221191,
-0.0434538833796978,
0.05667421594262123,
0.006610406097024679,
-0.015457520261406898,
0.010597657412290573,
0.0... |
ctrl | [
"pytorch",
"tf",
"ctrl",
"en",
"arxiv:1909.05858",
"arxiv:1910.09700",
"transformers",
"license:bsd-3-clause",
"has_space"
] | null | {
"architectures": null,
"model_type": "ctrl",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 17,007 | 2020-04-29T13:54:02Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-bi
* source languages: sv
* target languages: bi
* OPUS readme: [sv-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-bi/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-bi/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bi/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bi/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.bi | 30.8 | 0.496 |
| [
-0.019490348175168037,
-0.025495529174804688,
0.003333507338538766,
0.03956287354230881,
0.02155712991952896,
0.02218366228044033,
0.0031248251907527447,
0.0005159344291314483,
-0.044382959604263306,
0.050417233258485794,
0.006737901829183102,
-0.0176641047000885,
0.015765083953738213,
0.0... |
distilbert-base-cased-distilled-squad | [
"pytorch",
"tf",
"rust",
"safetensors",
"openvino",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 257,745 | 2020-04-29T13:54:29Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-bzs
* source languages: sv
* target languages: bzs
* OPUS readme: [sv-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-bzs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-bzs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bzs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-bzs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.bzs | 29.4 | 0.484 |
| [
-0.016843870282173157,
-0.030883632600307465,
-0.003053459106013179,
0.04123523458838463,
0.02472548745572567,
0.023578522726893425,
0.002119176322594285,
0.0017430539010092616,
-0.04436877742409706,
0.052066851407289505,
0.014401664026081562,
-0.014996137470006943,
0.01286276988685131,
0.... |
distilbert-base-cased | [
"pytorch",
"tf",
"onnx",
"distilbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"license:apache-2.0",
"has_space"
] | null | {
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 574,859 | 2020-04-29T13:54:41Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-ceb
* source languages: sv
* target languages: ceb
* OPUS readme: [sv-ceb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ceb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ceb/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ceb/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ceb/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ceb | 39.2 | 0.609 |
| [
-0.014031704515218735,
-0.025438033044338226,
-0.004184320569038391,
0.0348963625729084,
0.03355003520846367,
0.020555684342980385,
-0.0024367200676351786,
-0.001073420513421297,
-0.05089770257472992,
0.054659634828567505,
0.008214105851948261,
-0.011920290999114513,
0.010117584839463234,
... |
distilbert-base-multilingual-cased | [
"pytorch",
"tf",
"onnx",
"safetensors",
"distilbert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
... | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,339,633 | 2020-04-29T13:55:29Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-crs
* source languages: sv
* target languages: crs
* OPUS readme: [sv-crs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-crs/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-crs/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-crs/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-crs/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.crs | 32.4 | 0.512 |
| [
-0.0169678945094347,
-0.02684083953499794,
-0.00031479718745686114,
0.03856533765792847,
0.0310565996915102,
0.017772413790225983,
-0.00047410899423994124,
-0.00009213864541379735,
-0.04308989271521568,
0.05452718585729599,
0.012138043530285358,
-0.013340798206627369,
0.013370456174015999,
... |
distilbert-base-uncased-finetuned-sst-2-english | [
"pytorch",
"tf",
"rust",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:sst2",
"dataset:glue",
"arxiv:1910.01108",
"doi:10.57967/hf/0181",
"transformers",
"license:apache-2.0",
"model-index",
"has_space"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,060,704 | 2020-04-29T13:56:00Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-ee
* source languages: sv
* target languages: ee
* OPUS readme: [sv-ee](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ee/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ee/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ee/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ee/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ee | 29.7 | 0.508 |
| [
-0.015157999470829964,
-0.02489834651350975,
0.0022204043343663216,
0.035059016197919846,
0.02986333705484867,
0.02010691724717617,
0.001512163900770247,
-0.00166224199347198,
-0.048512864857912064,
0.05529370903968811,
0.009024754166603088,
-0.01547002512961626,
0.014694945886731148,
0.04... |
gpt2 | [
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"safetensors",
"gpt2",
"text-generation",
"en",
"doi:10.57967/hf/0039",
"transformers",
"exbert",
"license:mit",
"has_space"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 21,488,226 | 2020-04-29T13:57:44Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-fi
* source languages: sv
* target languages: fi
* OPUS readme: [sv-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-fi/README.md)
* dataset: opus+bt
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus+bt-2020-04-07.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-fi/opus+bt-2020-04-07.zip)
* test set translations: [opus+bt-2020-04-07.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-fi/opus+bt-2020-04-07.test.txt)
* test set scores: [opus+bt-2020-04-07.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-fi/opus+bt-2020-04-07.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| fiskmo_testset.sv.fi | 26.9 | 0.623 |
| Tatoeba.sv.fi | 45.2 | 0.678 |
| [
-0.013019328936934471,
-0.038230400532484055,
0.010015035979449749,
0.028826752677559853,
0.02730090171098709,
0.020042985677719116,
0.00022193504264578223,
-0.0015034266980364919,
-0.05555722117424011,
0.05084419250488281,
0.009288660250604153,
-0.011088148690760136,
0.014941670931875706,
... |
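The chr-F column reported in the benchmark tables above is a character n-gram F-score. As a rough illustration only — the official chrF (as implemented in sacrebleu) adds smoothing and different whitespace handling — a simplified version can be sketched as:

```python
from collections import Counter


def char_ngrams(text: str, n: int) -> Counter:
    """Count character n-grams, ignoring spaces (a simplification)."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))


def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: average char n-gram precision and recall over
    n = 1..max_n, combined as an F-beta score (beta=2 favors recall)."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # sentence shorter than n characters
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0.0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

An identical hypothesis and reference score 1.0, disjoint strings score 0.0; the 0.4–0.7 values in the tables fall between these extremes.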
openai-gpt | [
"pytorch",
"tf",
"rust",
"safetensors",
"openai-gpt",
"text-generation",
"en",
"arxiv:1705.11168",
"arxiv:1803.02324",
"arxiv:1910.09700",
"transformers",
"license:mit",
"has_space"
] | text-generation | {
"architectures": [
"OpenAIGPTLMHeadModel"
],
"model_type": "openai-gpt",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 65,432 | 2020-04-29T13:57:57Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-fj
* source languages: sv
* target languages: fj
* OPUS readme: [sv-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-21.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-fj/opus-2020-01-21.zip)
* test set translations: [opus-2020-01-21.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-fj/opus-2020-01-21.test.txt)
* test set scores: [opus-2020-01-21.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-fj/opus-2020-01-21.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.fj | 27.8 | 0.504 |
| [
-0.017206434160470963,
-0.02878635562956333,
0.0022997797932475805,
0.03934313729405403,
0.026329975575208664,
0.02156422846019268,
0.0023314030840992928,
-0.007573774550110102,
-0.04792700335383415,
0.04850076884031296,
0.01079613994807005,
-0.015483962371945381,
0.008812790736556053,
0.0... |
xlnet-large-cased | [
"pytorch",
"tf",
"xlnet",
"text-generation",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1906.08237",
"transformers",
"license:mit",
"has_space"
] | text-generation | {
"architectures": [
"XLNetLMHeadModel"
],
"model_type": "xlnet",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 250
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16,389 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-mos
* source languages: sv
* target languages: mos
* OPUS readme: [sv-mos](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-mos/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-mos/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-mos/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-mos/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.mos | 22.4 | 0.379 |
| [
-0.01548239216208458,
-0.02967357076704502,
-0.0015987640945240855,
0.03629230335354805,
0.02924896776676178,
0.020099928602576256,
0.0028563477098941803,
-0.00019551905279513448,
-0.04716816172003746,
0.05768042802810669,
0.013531406410038471,
-0.015843374654650688,
0.016527920961380005,
... |
123abhiALFLKFO/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
] | text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 57 | 2020-04-29T14:10:41Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-sv-ro
* source languages: sv
* target languages: ro
* OPUS readme: [sv-ro](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-ro/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-ro/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ro/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-ro/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.sv.ro | 29.5 | 0.510 |
| [
-0.0174572691321373,
-0.029075289145112038,
0.0014868362341076136,
0.03713125362992287,
0.034524038434028625,
0.01845996454358101,
0.0010102756787091494,
-0.002952408976852894,
-0.04965870454907417,
0.053230103105306625,
0.012436455115675926,
-0.01978135295212269,
0.008341281674802303,
0.0... |
AAli/distilbert-base-uncased-finetuned-cola | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- tl
- de
tags:
- translation
license: apache-2.0
---
### tgl-deu
* source group: Tagalog
* target group: German
* OPUS readme: [tgl-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-deu/README.md)
* model: transformer-align
* source language(s): tgl_Latn
* target language(s): deu
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.tgl.deu | 22.7 | 0.473 |
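The BLEU score above already includes BLEU's brevity penalty, which this card reports in the System Info below (`brevity_penalty` ≈ 0.969, `ref_len` = 2453). A minimal sketch of the standard formula — the helper name is ours, not part of the OPUS tooling:

```python
import math

def brevity_penalty(hyp_len, ref_len):
    # Standard BLEU brevity penalty: no penalty when the hypothesis is at
    # least as long as the reference, otherwise exp(1 - ref_len / hyp_len).
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)

# The reported brevity_penalty ~= 0.969 for ref_len = 2453 implies a total
# hypothesis length of roughly ref_len / (1 - ln BP) ~= 2378 tokens — an
# inference for illustration; the actual length is not published on the card.
implied_hyp_len = round(2453 / (1 - math.log(0.969)))
```

A penalty below 1.0, as here, means the system's translations were slightly shorter than the references overall.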
### System Info:
- hf_name: tgl-deu
- source_languages: tgl
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tgl-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['tl', 'de']
- src_constituents: {'tgl_Latn'}
- tgt_constituents: {'deu'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tgl-deu/opus-2020-06-17.test.txt
- src_alpha3: tgl
- tgt_alpha3: deu
- short_pair: tl-de
- chrF2_score: 0.473
- bleu: 22.7
- brevity_penalty: 0.969
- ref_len: 2453.0
- src_name: Tagalog
- tgt_name: German
- train_date: 2020-06-17
- src_alpha2: tl
- tgt_alpha2: de
- prefer_old: False
- long_pair: tgl-deu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | [
-0.0015514105325564742,
-0.03602760285139084,
-0.015437895432114601,
0.02678624354302883,
0.03947354853153229,
0.03557795658707619,
-0.006129395682364702,
0.005452553275972605,
-0.06680617481470108,
0.06118427589535713,
0.007607715670019388,
-0.021979741752147675,
-0.011899083852767944,
0.... |
AG/pretraining | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2020-05-06T03:08:09Z | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tn-fr
* source languages: tn
* target languages: fr
* OPUS readme: [tn-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tn.fr | 29.0 | 0.474 |
| [
-0.013260521925985813,
-0.029111793264746666,
-0.010353215038776398,
0.03805263713002205,
0.026881830766797066,
0.025307554751634598,
-0.0050832731649279594,
-0.007623458281159401,
-0.04599523916840553,
0.05041661113500595,
0.010100901126861572,
-0.010972761549055576,
-0.002240457572042942,
... |
ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000 | [
"pytorch",
"tensorboard",
"bert",
"text-classification",
"transformers",
"generated_from_trainer"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 43 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-tzo-es
* source languages: tzo
* target languages: es
* OPUS readme: [tzo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tzo-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tzo-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.tzo.es | 20.8 | 0.381 |
| [
-0.016300469636917114,
-0.032370567321777344,
-0.0016005240613594651,
0.03322167694568634,
0.027290794998407364,
0.024686550721526146,
-0.0006703579565510154,
-0.0017036153003573418,
-0.042694151401519775,
0.051986418664455414,
0.012306762859225273,
-0.020037641748785973,
0.00823224149644374... |
AbidineVall/my-new-shiny-tokenizer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-yo-es
* source languages: yo
* target languages: es
* OPUS readme: [yo-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-es/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-es/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-es/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.es | 22.0 | 0.393 |
| [
-0.0191014613956213,
-0.027392394840717316,
0.0010858223540708423,
0.03523239120841026,
0.032043855637311935,
0.02553832344710827,
-0.00038826483068987727,
-0.003994188737124205,
-0.03900210186839104,
0.05080665275454521,
0.002841126173734665,
-0.016726328060030937,
0.00805678591132164,
0.... |
Abirate/code_net_new_tokenizer_from_WPiece_bert_algorithm | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- translation
license: apache-2.0
---
### opus-mt-yo-fr
* source languages: yo
* target languages: fr
* OPUS readme: [yo-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/yo-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.yo.fr | 24.1 | 0.408 |
| [
-0.014705764129757881,
-0.030530540272593498,
-0.009156951680779457,
0.03489050269126892,
0.026853227987885475,
0.023904751986265182,
-0.005457064602524042,
-0.008594346232712269,
-0.042551685124635696,
0.05115272477269173,
0.010093224234879017,
-0.0121088158339262,
-0.00006340946129057556,
... |
AdapterHub/bert-base-uncased-pf-mit_movie_trivia | [
"bert",
"en",
"arxiv:2104.08247",
"adapter-transformers",
"token-classification",
"adapterhub:ner/mit_movie_trivia"
] | token-classification | {
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
language:
- it
- he
tags:
- translation
license: apache-2.0
---
### it-he
* source group: Italian
* target group: Hebrew
* OPUS readme: [ita-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-heb/README.md)
* model: transformer
* source language(s): ita
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-12-10.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.zip)
* test set translations: [opus-2020-12-10.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.test.txt)
* test set scores: [opus-2020-12-10.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ita.heb | 38.5 | 0.593 |
### System Info:
- hf_name: it-he
- source_languages: ita
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['it', 'he']
- src_constituents: ('Italian', {'ita'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: ita-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ita-heb/opus-2020-12-10.test.txt
- src_alpha3: ita
- tgt_alpha3: heb
- chrF2_score: 0.593
- bleu: 38.5
- brevity_penalty: 0.985
- ref_len: 9796.0
- src_name: Italian
- tgt_name: Hebrew
- train_date: 2020-12-10 00:00:00
- src_alpha2: it
- tgt_alpha2: he
- prefer_old: False
- short_pair: it-he
- helsinki_git_sha: b317f78a3ec8a556a481b6a53dc70dc11769ca96
- transformers_git_sha: 1310e1a758edc8e89ec363db76863c771fbeb1de
- port_machine: LM0-400-22516.local
- port_time: 2020-12-11-16:02 | [
-0.0025754282251000404,
-0.03077925369143486,
-0.01327608898282051,
0.02763237990438938,
0.03391607105731964,
0.016822785139083862,
0.005319060757756233,
-0.0020335433073341846,
-0.06895366311073303,
0.05721491575241089,
0.0028573847375810146,
-0.015574858523905277,
-0.00004547301432467066,
... |
AdapterHub/roberta-base-pf-boolq | [
"roberta",
"en",
"dataset:boolq",
"arxiv:2104.08247",
"adapter-transformers",
"text-classification",
"adapterhub:qa/boolq"
] | text-classification | {
"architectures": null,
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | null | ## ParsBERT: Transformer-based Model for Persian Language Understanding
ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base.
Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515)
All the models (downstream tasks) are uncased and trained with whole word masking. (Coming soon; stay tuned.)
---
## Introduction
This model is pre-trained on a large Persian corpus with various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 2M documents. A large subset of this corpus was crawled manually.
As a part of ParsBERT methodology, an extensive pre-processing combining POS tagging and WordPiece segmentation was carried out to bring the corpus into a proper format. This process produces more than 40M true sentences.
## Evaluation
ParsBERT is evaluated on three NLP downstream tasks: Sentiment Analysis (SA), Text Classification (TC), and Named Entity Recognition (NER). Because existing resources were insufficient, two large datasets for SA and two for text classification were composed manually; they are available for public use and benchmarking. ParsBERT outperformed all other language models, including multilingual BERT and other hybrid deep learning models, on all tasks, improving the state-of-the-art performance in Persian language modeling.
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
### Sentiment Analysis (SA) task
| Dataset | ParsBERT | mBERT | DeepSentiPers |
|:--------------------------:|:---------:|:-----:|:-------------:|
| Digikala User Comments | 81.74* | 80.74 | - |
| SnappFood User Comments | 88.12* | 87.87 | - |
| SentiPers (Multi Class) | 71.11* | - | 69.33 |
| SentiPers (Binary Class) | 92.13* | - | 91.98 |
### Text Classification (TC) task
| Dataset | ParsBERT | mBERT |
|:-----------------:|:--------:|:-----:|
| Digikala Magazine | 93.59* | 90.72 |
| Persian News | 97.19* | 95.79 |
### Named Entity Recognition (NER) task
| Dataset | ParsBERT | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF |
|:-------:|:--------:|:--------:|:----------:|:--------------:|:----------:|:----------------:|:------------:|
| PEYMA | 93.10* | 86.64 | - | 90.59 | - | 84.00 | - |
| ARMAN | 98.79* | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 |
**If you have tested ParsBERT on a public dataset and want to add your results to the table above, open a pull request or contact us. Please also make sure your code is available online so we can add it as a reference.**
## How to use
### TensorFlow 2.0
```python
from transformers import AutoConfig, AutoTokenizer, TFAutoModel
config = AutoConfig.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
model = TFAutoModel.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
text = "ما در هوشواره معتقدیم با انتقال صحیح دانش و آگاهی، همه افراد میتوانند از ابزارهای هوشمند استفاده کنند. شعار ما هوش مصنوعی برای همه است."
tokenizer.tokenize(text)
>>> ['ما', 'در', 'هوش', '##واره', 'معتقدیم', 'با', 'انتقال', 'صحیح', 'دانش', 'و', 'اگاهی', '،', 'همه', 'افراد', 'میتوانند', 'از', 'ابزارهای', 'هوشمند', 'استفاده', 'کنند', '.', 'شعار', 'ما', 'هوش', 'مصنوعی', 'برای', 'همه', 'است', '.']
```
### Pytorch
```python
from transformers import AutoConfig, AutoTokenizer, AutoModel
config = AutoConfig.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
model = AutoModel.from_pretrained("HooshvareLab/bert-base-parsbert-uncased")
```
## NLP Tasks Tutorial
Coming soon; stay tuned.
## Cite
Please cite the following paper in your publication if you are using [ParsBERT](https://arxiv.org/abs/2005.12515) in your research:
```markdown
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Acknowledgments
We hereby, express our gratitude to the [Tensorflow Research Cloud (TFRC) program](https://tensorflow.org/tfrc) for providing us with the necessary computation resources. We also thank [Hooshvare](https://hooshvare.com) Research Group for facilitating dataset gathering and scraping online text resources.
## Contributors
- Mehrdad Farahani: [Linkedin](https://www.linkedin.com/in/m3hrdadfi/), [Twitter](https://twitter.com/m3hrdadfi), [Github](https://github.com/m3hrdadfi)
- Mohammad Gharachorloo: [Linkedin](https://www.linkedin.com/in/mohammad-gharachorloo/), [Twitter](https://twitter.com/MGharachorloo), [Github](https://github.com/baarsaam)
- Marzieh Farahani: [Linkedin](https://www.linkedin.com/in/marziehphi/), [Twitter](https://twitter.com/marziehphi), [Github](https://github.com/marziehphi)
- Mohammad Manthouri: [Linkedin](https://www.linkedin.com/in/mohammad-manthouri-aka-mansouri-07030766/), [Twitter](https://twitter.com/mmanthouri), [Github](https://github.com/mmanthouri)
- Hooshvare Team: [Official Website](https://hooshvare.com/), [Linkedin](https://www.linkedin.com/company/hooshvare), [Twitter](https://twitter.com/hooshvare), [Github](https://github.com/hooshvare), [Instagram](https://www.instagram.com/hooshvare/)
## Releases
### Release v0.1 (May 27, 2019)
This is the first version of our ParsBERT based on BERT<sub>BASE</sub>
| [
-0.0003529514651745558,
-0.027824288234114647,
-0.025046609342098236,
0.06884883344173431,
0.017561912536621094,
0.042692169547080994,
-0.013242911547422409,
-0.011910276487469673,
-0.04365214332938194,
0.06518179178237915,
0.04749257490038872,
-0.024192797020077705,
0.002484452910721302,
... |
Akash7897/fill_mask_model | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- zh
inference:
parameters:
max_new_tokens: 128
do_sample: True
license: apache-2.0
---
# Wenzhong-GPT2-3.5B
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
善于处理NLG任务,目前最大的,中文版的GPT2
Focused on handling NLG tasks, this is currently the largest Chinese GPT2.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言生成 NLG| 闻仲 Wenzhong | GPT2 | 3.5B | 中文 Chinese |
## 模型信息 Model Information
为了可以获得一个强大的单向语言模型,我们采用GPT模型结构,并且应用于中文语料上。具体地,这个模型拥有30层解码器和35亿参数,这比原本的GPT2-XL还要大。我们在100G的中文语料上预训练,这消耗了32个NVIDIA A100显卡大约28小时。据我们所知,它是目前最大的中文的GPT模型。
To obtain a robust unidirectional language model, we adopt the GPT model structure and apply it to the Chinese corpus. Specifically, this model has 30 decoder layers and 3.5 billion parameters, which is larger than the original GPT2-XL. We pre-train it on 100G of Chinese corpus, which consumes 32 NVIDIA A100 GPUs for about 28 hours. To the best of our knowledge, it is the largest Chinese GPT model currently available.
## 使用 Usage
### 加载模型 Loading Models
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('IDEA-CCNL/Wenzhong-GPT2-3.5B')
model = GPT2Model.from_pretrained('IDEA-CCNL/Wenzhong-GPT2-3.5B')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### 使用示例 Usage Examples
```python
from transformers import pipeline, set_seed
set_seed(55)
generator = pipeline('text-generation', model='IDEA-CCNL/Wenzhong-GPT2-3.5B')
generator("北京位于", max_length=30, num_return_sequences=1)
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you use this resource in your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | [
-0.02869064174592495,
-0.03842319920659065,
-0.012968935072422028,
0.0431099496781826,
0.0467127300798893,
0.0257241353392601,
-0.0013861616607755423,
-0.029184909537434578,
-0.010931779630482197,
0.04145671799778938,
0.02963779680430889,
0.023845866322517395,
0.009576385840773582,
0.05227... |
Akash7897/gpt2-wikitext2 | [
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- en
inference: false
license: apache-2.0
---
# Yuyuan-GPT2-3.5B
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
目前最大的,医疗领域的生成语言模型GPT2。
Currently the largest generative GPT2 language model in the medical domain.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 特殊 Special | 领域 Domain | 余元 Yuyuan | GPT2 | 3.5B | - |
## 模型信息 Model Information
我们采用与Wenzhong-GPT2-3.5B相同的架构,在50GB的医学(PubMed)语料库上进行预训练。我们使用了32个NVIDIA A100显卡大约7天。我们的Yuyuan-GPT2-3.5B是医疗领域最大的开源的GPT2模型。进一步地,模型可以通过计算困惑度(PPL)来判断事实。为了完成问答功能,我们将短语模式从疑问的形式转换为了陈述句。
We adopt the same architecture as Wenzhong-GPT2-3.5B to be pre-trained on 50 GB medical (PubMed) corpus. We use 32 NVIDIA A100 GPUs for about 7 days. Our Yuyuan-GPT2-3.5B is the largest open-source GPT2 model in the medical domain. We further allow the model to judge facts by computing perplexity (PPL). To accomplish question-and-answer functionality, we transform the phrase pattern from interrogative to declarative.
## 使用 Usage
### 加载模型 Loading Models
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('IDEA-CCNL/Yuyuan-GPT2-3.5B')
model = GPT2Model.from_pretrained('IDEA-CCNL/Yuyuan-GPT2-3.5B')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### 使用示例 Usage Examples
```python
from transformers import pipeline, set_seed
set_seed(55)
generator = pipeline('text-generation', model='IDEA-CCNL/Yuyuan-GPT2-3.5B')
generator("Diabetics should not eat", max_length=30, num_return_sequences=1)
```
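The card above notes that Yuyuan-GPT2-3.5B can judge facts by computing perplexity (PPL) over declarative statements. Below is a minimal sketch of how that might look; the `score_statement` helper, the use of `GPT2LMHeadModel`, and the example statements are our assumptions, not part of the official release.

```python
import math

import torch


def perplexity_from_loss(loss: float) -> float:
    # Perplexity is the exponential of the mean cross-entropy loss.
    return math.exp(loss)


def score_statement(model, tokenizer, statement: str) -> float:
    # Lower perplexity means the model finds the statement more plausible.
    inputs = tokenizer(statement, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return perplexity_from_loss(outputs.loss.item())


if __name__ == "__main__":
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("IDEA-CCNL/Yuyuan-GPT2-3.5B")
    model = GPT2LMHeadModel.from_pretrained("IDEA-CCNL/Yuyuan-GPT2-3.5B")
    # Declarative phrasing, as described above: compare two competing
    # statements and prefer the one with the lower perplexity.
    for s in ["Insulin lowers blood glucose.", "Insulin raises blood glucose."]:
        print(s, score_statement(model, tokenizer, s))
```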
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you use this resource in your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
| [
-0.018452588468790054,
-0.03708486258983612,
-0.010172748006880283,
0.055608976632356644,
0.040516216307878494,
0.02838461473584175,
0.010171554051339626,
-0.024289660155773163,
0.009912418201565742,
0.025750627741217613,
0.026493532583117485,
0.009784134104847908,
0.020467333495616913,
0.... |
Akash7897/my-newtokenizer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- zh
license: apache-2.0
widget:
- text: "生活的真谛是[MASK]。"
---
# Zhouwenwang-Unified-1.3B
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
与追一科技合作探索的中文统一模型,13亿参数的编码器结构模型。
The Chinese unified model explored in cooperation with Zhuiyi Technology, the encoder structure model with 1.3B parameters.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 特殊 Special | 探索 Exploration | 周文王 Zhouwenwang | 待定 TBD | 1.3B | 中文 Chinese |
## 模型信息 Model Information
IDEA研究院认知计算中心联合追一科技有限公司提出的具有新结构的大模型。该模型在预训练阶段时考虑统一LM和MLM的任务,这让其同时具备生成和理解的能力,并且增加了旋转位置编码技术。目前已有13亿参数的Zhouwenwang-Unified-1.3B大模型,是中文领域中可以同时做LM和MLM任务的最大的模型。我们后续会持续在模型规模、知识融入、监督辅助任务等方向不断优化。
A large-scale model (Zhouwenwang-Unified-1.3B) with a new structure proposed by IDEA CCNL and Zhuiyi Technology. The model unifies the LM (Language Modeling) and MLM (Masked Language Modeling) tasks during the pre-training phase, which gives it both generative and comprehension capabilities, and applies rotary position embedding. At present, Zhouwenwang-Unified-1.3B, with 1.3B parameters, is the largest Chinese model that can perform both LM and MLM tasks. In the future, we will continue to optimize it in the directions of model size, knowledge incorporation, and auxiliary supervised tasks.
### 下游任务 Performance
下游中文任务的得分(没有做任何数据增强)。
Scores on downstream Chinese tasks (without any data augmentation).
| 模型 Model | afqmc | tnews | iflytek | ocnli | cmnli | wsc | csl |
| :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: | :----: |
| roberta-wwm-ext-large | 0.7514 | 0.5872 | 0.6152 | 0.7770 | 0.8140 | 0.8914 | 0.8600 |
| Zhouwenwang-Unified-1.3B | 0.7463 | 0.6036 | 0.6288 | 0.7654 | 0.7741 | 0.8849 | 0.8777 |
## 使用 Usage
因为[transformers](https://github.com/huggingface/transformers)库中是没有 Zhouwenwang-Unified-1.3B相关的模型结构的,所以你可以在我们的[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)中找到并且运行代码。
Since there is no structure for Zhouwenwang-Unified-1.3B in the [transformers library](https://github.com/huggingface/transformers), you can find the structure of Zhouwenwang-Unified-1.3B and run the code in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
### 加载模型 Loading Models
```python
from fengshen import RoFormerModel
from fengshen import RoFormerConfig
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
config = RoFormerConfig.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
```
### 使用示例 Usage Examples
你可以使用该模型进行续写任务。
You can use the model for continuation writing tasks.
```python
from fengshen import RoFormerModel
from transformers import AutoTokenizer
import torch
import numpy as np
sentence = '清华大学位于'
max_length = 32
tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
for i in range(max_length):
    # Prepend [CLS] and re-encode the growing sentence at every step.
    encode = torch.tensor(
        [[tokenizer.cls_token_id]+tokenizer.encode(sentence, add_special_tokens=False)]).long()
    logits = model(encode)[0]
    # Project hidden states onto the vocabulary via the tied input embeddings.
    logits = torch.nn.functional.linear(
        logits, model.embeddings.word_embeddings.weight)
    logits = torch.nn.functional.softmax(
        logits, dim=-1).cpu().detach().numpy()[0]
    # Sample the next token from the distribution at the last position.
    sentence = sentence + \
        tokenizer.decode(int(np.random.choice(logits.shape[1], p=logits[-1])))
    if sentence[-1] == '。':  # stop at the first full stop
        break
print(sentence)
```
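Because the model is trained on both LM and MLM objectives, it can also fill a `[MASK]` slot, as in the widget example `生活的真谛是[MASK]。`. The sketch below is our assumption of how that could be done through the same `RoFormerModel` interface; the `mask_positions` helper is ours and not part of Fengshenbang-LM.

```python
import torch


def mask_positions(token_ids, mask_id):
    # Indices of [MASK] tokens in a list of token ids.
    return [i for i, t in enumerate(token_ids) if t == mask_id]


if __name__ == "__main__":
    from fengshen import RoFormerModel
    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
    model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
    text = "生活的真谛是[MASK]。"
    ids = tokenizer.encode(text)
    logits = model(torch.tensor([ids]).long())[0]
    # Project hidden states onto the vocabulary via the tied input embeddings.
    logits = torch.nn.functional.linear(
        logits, model.embeddings.word_embeddings.weight)
    for pos in mask_positions(ids, tokenizer.mask_token_id):
        pred = logits[0, pos].argmax(-1).item()
        print(text.replace("[MASK]", tokenizer.decode([pred]), 1))
```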
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you use this resource in your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | [
-0.028650645166635513,
-0.021756663918495178,
-0.00395576935261488,
0.041532356292009354,
0.052946437150239944,
0.0168501827865839,
-0.010976526886224747,
-0.026882914826273918,
0.002263447502627969,
0.06505380570888519,
0.02466111071407795,
0.013537776656448841,
0.007110432721674442,
0.05... |
Akash7897/test-clm | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- zh
license: apache-2.0
widget:
- text: "生活的真谛是[MASK]。"
---
# Zhouwenwang-Unified-110M
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
与追一科技合作探索的中文统一模型,1.1亿参数的编码器结构模型。
The Chinese unified model explored in cooperation with Zhuiyi Technology, the encoder structure model with 110M parameters.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 特殊 Special | 探索 Exploration | 周文王 Zhouwenwang | 待定 TBD | 110M | 中文 Chinese |
## 模型信息 Model Information
IDEA研究院认知计算中心联合追一科技有限公司提出的具有新结构的大模型。该模型在预训练阶段时考虑统一LM和MLM的任务,这让其同时具备生成和理解的能力,并且增加了旋转位置编码技术。我们后续会持续在模型规模、知识融入、监督辅助任务等方向不断优化。
A large-scale model (Zhouwenwang-Unified-110M) with a new structure proposed by IDEA CCNL and Zhuiyi Technology. The model unifies the LM (Language Modeling) and MLM (Masked Language Modeling) tasks during the pre-training phase, which gives it both generative and comprehension capabilities, and applies rotary position embedding. In the future, we will continue to optimize it in the directions of model size, knowledge incorporation, and auxiliary supervised tasks.
## 使用 Usage
因为[transformers](https://github.com/huggingface/transformers)库中是没有 Zhouwenwang-Unified-110M相关的模型结构的,所以你可以在我们的[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)中找到并且运行代码。
Since there is no structure for Zhouwenwang-Unified-110M in the [transformers library](https://github.com/huggingface/transformers), you can find the structure of Zhouwenwang-Unified-110M and run the code in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
### 加载模型 Loading Models
```python
from fengshen import RoFormerModel
from fengshen import RoFormerConfig
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
config = RoFormerConfig.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
```
### 使用示例 Usage Examples
你可以使用该模型进行续写任务。
You can use the model for continuation writing tasks.
```python
from fengshen import RoFormerModel
from transformers import AutoTokenizer
import torch
import numpy as np
sentence = '清华大学位于'
max_length = 32
tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-110M")
for i in range(max_length):
    # Prepend [CLS] and re-encode the growing sentence at every step.
    encode = torch.tensor(
        [[tokenizer.cls_token_id]+tokenizer.encode(sentence, add_special_tokens=False)]).long()
    logits = model(encode)[0]
    # Project hidden states onto the vocabulary via the tied input embeddings.
    logits = torch.nn.functional.linear(
        logits, model.embeddings.word_embeddings.weight)
    logits = torch.nn.functional.softmax(
        logits, dim=-1).cpu().detach().numpy()[0]
    # Sample the next token from the distribution at the last position.
    sentence = sentence + \
        tokenizer.decode(int(np.random.choice(logits.shape[1], p=logits[-1])))
    if sentence[-1] == '。':  # stop at the first full stop
        break
print(sentence)
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you use this resource in your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
| [
-0.03523636236786842,
-0.014730755239725113,
0.002511782106012106,
0.037018563598394394,
0.049375031143426895,
0.021216923370957375,
-0.009851817041635513,
-0.022707263007760048,
0.0039036874659359455,
0.06837373971939087,
0.022207781672477722,
0.008567810989916325,
0.007523709908127785,
0... |
Akashamba/distilbert-base-uncased-finetuned-ner | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- conversational
---
# Rick And Morty DialoGPT Model | [
-0.028104610741138458,
0.02264019101858139,
-0.00687492685392499,
0.030190417543053627,
0.012071076780557632,
0.012977051548659801,
0.007178216241300106,
0.013848679140210152,
-0.0022445700597018003,
0.01828915625810623,
0.05959046632051468,
-0.026109836995601654,
0.01368674449622631,
0.04... |
Akashpb13/Galician_xlsr | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"gl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
widget:
- text: "My name is Mark and I live in London. I am a postgraduate student at Queen Mary University."
language:
- en
license: mit
---
# Hate Speech Classifier for Social Media Content in English Language
A monolingual model for hate speech classification of social media content in English. The model was trained on 103,190 YouTube comments and tested on an independent test set of 20,554 YouTube comments. It is based on the English BERT base pre-trained language model.
## Tokenizer
During training the text was preprocessed using the original English BERT base tokenizer. We suggest the same tokenizer is used for inference.
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent | [
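A minimal inference sketch using the `transformers` text-classification pipeline is below. The model id is a placeholder (substitute this repository's actual id), and the `label_name` helper simply mirrors the class list above.

```python
LABELS = {0: "acceptable", 1: "inappropriate", 2: "offensive", 3: "violent"}


def label_name(class_index: int) -> str:
    # Map a predicted class index to its human-readable name.
    return LABELS[class_index]


if __name__ == "__main__":
    from transformers import pipeline

    # NOTE: "<this-model-id>" is a placeholder, not a real repository id.
    clf = pipeline("text-classification", model="<this-model-id>")
    result = clf("My name is Mark and I live in London.")[0]
    print(result["label"], result["score"])
```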
-0.0026208788622170687,
0.009780794382095337,
-0.0015002954751253128,
0.04992349073290825,
0.04003685712814331,
0.05840136483311653,
-0.023037653416395187,
-0.002100074663758278,
-0.033831410109996796,
0.054478999227285385,
0.040941059589385986,
-0.004225136712193489,
0.01917392760515213,
... |
Akashpb13/Hausa_xlsr | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ha",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"license:apache-2.0",
"model-index",
"... | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
widget:
- text: "Ciao, mi chiamo Marcantonio, sono di Roma. Studio informatica all'Università di Roma."
language:
- it
license: mit
---
# Hate Speech Classifier for Social Media Content in Italian Language
A monolingual model for hate speech classification of social media content in Italian. The model was trained on 119,670 YouTube comments and tested on an independent test set of 21,072 YouTube comments. It is based on the Italian ALBERTO pre-trained language model.
## Tokenizer
During training the text was preprocessed using the original Italian ALBERTO tokenizer. We suggest the same tokenizer is used for inference.
## Model output
The model classifies each input into one of four distinct classes:
* 0 - acceptable
* 1 - inappropriate
* 2 - offensive
* 3 - violent | [
-0.009822645224630833,
-0.010803237557411194,
0.0036340889055281878,
0.04616278037428856,
0.046047575771808624,
0.03663560003042221,
-0.01012216042727232,
-0.0009890650399029255,
-0.033176273107528687,
0.0555572509765625,
0.04579516127705574,
-0.009143206290900707,
0.002608823124319315,
0.... |
AkshaySg/gramCorrection | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 4 | null | ---
datasets:
- squad_v2
- wiki_qa
language:
- en
metrics:
- accuracy
pipeline_tag: question-answering
---
A DistilBERT model fine-tuned for question answering.
0.007829503156244755,
-0.008843069896101952,
-0.0036526438780128956,
0.03804255276918411,
0.05831139534711838,
-0.0018489186186343431,
-0.008398025296628475,
0.018746187910437584,
-0.04732096940279007,
0.03760438412427902,
0.052303191274404526,
0.015803705900907516,
0.011370775289833546,
0... |