Helsinki-NLP/opus-mt-pqe-en

Frameworks: PyTorch, TensorFlow

Contributed by the Language Technology Research Group at the University of Helsinki (Helsinki-NLP).

How to use this model directly from the 🤗/transformers library:

			
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-pqe-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-pqe-en")
```
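Building on the loading snippet above, a minimal end-to-end translation sketch. The Samoan input sentence and the `max_new_tokens` value are illustrative assumptions, not part of the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-pqe-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-pqe-en")

# Tokenize a source sentence (Samoan here; any of the listed source
# languages can be fed in directly, since the only target is English).
inputs = tokenizer("Talofa lava!", return_tensors="pt")

# Generate the translation and decode it back to text.
outputs = model.generate(**inputs, max_new_tokens=64)
translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translation)
```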

pqe-eng

  • source group: Eastern Malayo-Polynesian languages

  • target group: English

  • OPUS readme: pqe-eng

  • model: transformer

  • source language(s): fij gil haw mah mri nau niu rap smo tah ton tvl

  • target language(s): eng

  • pre-processing: normalization + SentencePiece (spm32k,spm32k)

  • download original weights: opus-2020-06-28.zip

  • test set translations: opus-2020-06-28.test.txt

  • test set scores: opus-2020-06-28.eval.txt

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.fij-eng.fij.eng | 26.9 | 0.361 |
| Tatoeba-test.gil-eng.gil.eng | 49.0 | 0.618 |
| Tatoeba-test.haw-eng.haw.eng | 1.6 | 0.126 |
| Tatoeba-test.mah-eng.mah.eng | 13.7 | 0.257 |
| Tatoeba-test.mri-eng.mri.eng | 7.4 | 0.250 |
| Tatoeba-test.multi.eng | 12.6 | 0.268 |
| Tatoeba-test.nau-eng.nau.eng | 2.3 | 0.125 |
| Tatoeba-test.niu-eng.niu.eng | 34.4 | 0.471 |
| Tatoeba-test.rap-eng.rap.eng | 10.3 | 0.215 |
| Tatoeba-test.smo-eng.smo.eng | 28.5 | 0.413 |
| Tatoeba-test.tah-eng.tah.eng | 12.1 | 0.199 |
| Tatoeba-test.ton-eng.ton.eng | 41.8 | 0.517 |
| Tatoeba-test.tvl-eng.tvl.eng | 42.9 | 0.540 |
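The chr-F column above is a character n-gram F-score. The official scores come from sacrebleu's chrF; the sketch below is a simplified, self-contained illustration of the idea (character n-grams up to order 6, recall-weighted with β = 2, single hypothesis/reference pair, no sacrebleu whitespace handling):

```python
from collections import Counter

def char_ngrams(text, n):
    """Count character n-grams of order n in a string."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified chr-F: average n-gram precision/recall over n = 1..max_n,
    combined into an F-score weighted toward recall (beta = 2)."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp = char_ngrams(hypothesis, n)
        ref = char_ngrams(reference, n)
        # Clipped overlap: each n-gram counts at most min(hyp, ref) times.
        overlap = sum((hyp & ref).values())
        if sum(hyp.values()) > 0:
            precisions.append(overlap / sum(hyp.values()))
        if sum(ref.values()) > 0:
            recalls.append(overlap / sum(ref.values()))
    p = sum(precisions) / len(precisions) if precisions else 0.0
    r = sum(recalls) / len(recalls) if recalls else 0.0
    if p + r == 0.0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

An identical hypothesis and reference score 1.0; strings sharing no character n-grams score 0.0.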

System Info: