Helsinki-NLP/opus-mt-en-gmw

Frameworks: pytorch, tf

Contributed by the Language Technology Research Group at the University of Helsinki

How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-gmw")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-gmw")
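Because the target side of this model covers many languages, every input sentence must begin with a sentence-initial >>id<< token naming the target language (see the model notes below). A minimal sketch of preparing such inputs — the helper name and example sentence are illustrative, and the actual generate call (which requires downloading the model) is shown only as a comment:

```python
def with_target_token(sentence: str, lang_id: str) -> str:
    """Prefix the sentence-initial >>id<< token the model expects."""
    return f">>{lang_id}<< {sentence}"

# Prepare the same English sentence for two target languages.
inputs = [with_target_token("How are you today?", "nld"),
          with_target_token("How are you today?", "deu")]
print(inputs[0])  # >>nld<< How are you today?

# With the tokenizer/model loaded as above, translation would look like:
#   batch = tokenizer(inputs, return_tensors="pt", padding=True)
#   out = model.generate(**batch)
#   print(tokenizer.batch_decode(out, skip_special_tokens=True))
```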

eng-gmw

  • source group: English

  • target group: West Germanic languages

  • OPUS readme: eng-gmw

  • model: transformer

  • source language(s): eng

  • target language(s): afr ang_Latn deu enm_Latn frr fry gos gsw ksh ltz nds nld pdc sco stq swg yid

  • pre-processing: normalization + SentencePiece (spm32k,spm32k)

  • a sentence-initial language token is required in the form >>id<< (id = a valid target language ID from the list above, e.g. >>nld<<)

  • download original weights: opus2m-2020-08-01.zip

  • test set translations: opus2m-2020-08-01.test.txt

  • test set scores: opus2m-2020-08-01.eval.txt

Benchmarks

testset BLEU chr-F
newssyscomb2009-engdeu.eng.deu 21.4 0.518
news-test2008-engdeu.eng.deu 21.0 0.510
newstest2009-engdeu.eng.deu 20.4 0.513
newstest2010-engdeu.eng.deu 22.9 0.528
newstest2011-engdeu.eng.deu 20.5 0.508
newstest2012-engdeu.eng.deu 21.0 0.507
newstest2013-engdeu.eng.deu 24.7 0.533
newstest2015-ende-engdeu.eng.deu 28.2 0.568
newstest2016-ende-engdeu.eng.deu 33.3 0.605
newstest2017-ende-engdeu.eng.deu 26.5 0.559
newstest2018-ende-engdeu.eng.deu 39.9 0.649
newstest2019-ende-engdeu.eng.deu 35.9 0.616
Tatoeba-test.eng-afr.eng.afr 55.7 0.740
Tatoeba-test.eng-ang.eng.ang 6.5 0.164
Tatoeba-test.eng-deu.eng.deu 40.4 0.614
Tatoeba-test.eng-enm.eng.enm 2.3 0.254
Tatoeba-test.eng-frr.eng.frr 8.4 0.248
Tatoeba-test.eng-fry.eng.fry 17.9 0.424
Tatoeba-test.eng-gos.eng.gos 2.2 0.309
Tatoeba-test.eng-gsw.eng.gsw 1.6 0.186
Tatoeba-test.eng-ksh.eng.ksh 1.5 0.189
Tatoeba-test.eng-ltz.eng.ltz 20.2 0.383
Tatoeba-test.eng.multi 41.6 0.609
Tatoeba-test.eng-nds.eng.nds 18.9 0.437
Tatoeba-test.eng-nld.eng.nld 53.1 0.699
Tatoeba-test.eng-pdc.eng.pdc 7.7 0.262
Tatoeba-test.eng-sco.eng.sco 37.7 0.557
Tatoeba-test.eng-stq.eng.stq 5.9 0.380
Tatoeba-test.eng-swg.eng.swg 6.2 0.236
Tatoeba-test.eng-yid.eng.yid 6.8 0.296
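The chr-F column above is an F-score over character n-grams. The official scores come from corpus-level evaluation tooling; the following is only a simplified sentence-level sketch (n = 1..6, beta = 2, whitespace ignored — parameters chosen to mirror common chrF defaults, not the exact scorer used here) to illustrate what the metric measures:

```python
from collections import Counter

def char_ngrams(text, n):
    """Count character n-grams, ignoring whitespace."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified sentence-level chrF: F-beta over character n-gram
    precision and recall, averaged over n = 1..max_n."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue  # strings too short for this n-gram order
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

print(round(chrf("hallo wereld", "hallo wereld"), 3))  # identical strings -> 1.0
```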

System Info: