Contributed by

Language Technology Research Group at the University of Helsinki


  • source group: English

  • target group: North Germanic languages

  • OPUS readme: eng-gmq

  • model: transformer

  • source language(s): eng

  • target language(s): dan fao isl nno nob nob_Hebr non_Latn swe

  • pre-processing: normalization + SentencePiece (spm32k,spm32k)

  • a sentence-initial language token is required in the form >>id<< (id = a valid target language ID from the list above)

  • download original weights:

  • test set translations: opus2m-2020-08-01.test.txt

  • test set scores: opus2m-2020-08-01.eval.txt
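Because the model is multilingual on the target side, every input sentence must be prefixed with the >>id<< token before tokenization. A minimal sketch of that tagging step, assuming the helper name is illustrative and the ID set is the target-language list from this card:

```python
# Valid target-language IDs, taken from the "target language(s)" bullet above.
VALID_TARGETS = {"dan", "fao", "isl", "nno", "nob", "nob_Hebr", "non_Latn", "swe"}


def add_language_token(sentence: str, target_id: str) -> str:
    """Prefix a source sentence with the sentence-initial >>id<< token.

    The tagged string is what you would then pass to the model's tokenizer.
    """
    if target_id not in VALID_TARGETS:
        raise ValueError(f"unknown target language ID: {target_id}")
    return f">>{target_id}<< {sentence}"


print(add_language_token("The weather is nice today.", "swe"))
# → >>swe<< The weather is nice today.
```

With the transformers library, this tagged string (not the raw sentence) is the input you tokenize and pass to the model's generate method.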


testset                        BLEU   chr-F
Tatoeba-test.eng-dan.eng.dan   57.7   0.724
Tatoeba-test.eng-fao.eng.fao    9.2   0.322
Tatoeba-test.eng-isl.eng.isl   23.8   0.506
Tatoeba-test.eng.multi         52.8   0.688
Tatoeba-test.eng-non.eng.non    0.7   0.196
Tatoeba-test.eng-nor.eng.nor   50.3   0.678
Tatoeba-test.eng-swe.eng.swe   57.8   0.717
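The score rows above are whitespace-separated: testset name, BLEU, then chr-F. An illustrative parser (not an official OPUS tool) that turns such rows into a lookup table:

```python
# The benchmark rows from this model card, embedded verbatim for the example.
SCORES_TXT = """\
Tatoeba-test.eng-dan.eng.dan 57.7 0.724
Tatoeba-test.eng-fao.eng.fao 9.2 0.322
Tatoeba-test.eng-isl.eng.isl 23.8 0.506
Tatoeba-test.eng.multi 52.8 0.688
Tatoeba-test.eng-non.eng.non 0.7 0.196
Tatoeba-test.eng-nor.eng.nor 50.3 0.678
Tatoeba-test.eng-swe.eng.swe 57.8 0.717
"""


def parse_scores(text: str) -> dict[str, tuple[float, float]]:
    """Map each testset name to its (BLEU, chr-F) pair."""
    scores = {}
    for line in text.strip().splitlines():
        name, bleu, chrf = line.split()
        scores[name] = (float(bleu), float(chrf))
    return scores


scores = parse_scores(SCORES_TXT)
print(scores["Tatoeba-test.eng-dan.eng.dan"])
# → (57.7, 0.724)
```

The same parsing applies to the downloadable opus2m-2020-08-01.eval.txt score files, assuming they follow this row layout.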

System Info: