Helsinki-NLP/opus-mt-bnt-en
How to use this model directly from the 🤗/transformers library:

			
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-bnt-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-bnt-en")
```
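
Once the checkpoint is loaded, translation is a standard generate-and-decode call. The sketch below is illustrative: the Swahili input sentence and the generation settings (beam size, max length) are example choices, not taken from the original card. Because the target side of this model is English only, the input should not need a `>>lang<<` target-language token.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-bnt-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-bnt-en")

# Illustrative input: a Swahili sentence ("Good morning.").
src_text = ["Habari ya asubuhi."]

batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch, max_length=128, num_beams=4)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```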

bnt-eng

  • source group: Bantu languages

  • target group: English

  • OPUS readme: bnt-eng

  • model: transformer

  • source language(s): kin lin lug nya run sna swh toi_Latn tso umb xho zul

  • target language(s): eng

  • pre-processing: normalization + SentencePiece (spm32k,spm32k); see the tokenizer sketch after this list

  • download original weights: opus2m-2020-07-31.zip

  • test set translations: opus2m-2020-07-31.test.txt

  • test set scores: opus2m-2020-07-31.eval.txt
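
The checkpoint published for 🤗/transformers ships the SentencePiece models, so the spm32k segmentation listed above is applied inside the tokenizer and does not need to be run separately. A small sketch to inspect the resulting subword pieces (the example sentence is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-bnt-en")

# The SentencePiece segmentation (spm32k) is applied by the tokenizer itself;
# this just prints the subword pieces produced for an illustrative sentence.
print(tokenizer.tokenize("Habari ya asubuhi."))
```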

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.kin-eng.kin.eng | 31.7 | 0.481 |
| Tatoeba-test.lin-eng.lin.eng | 8.3 | 0.271 |
| Tatoeba-test.lug-eng.lug.eng | 5.3 | 0.128 |
| Tatoeba-test.multi.eng | 23.1 | 0.394 |
| Tatoeba-test.nya-eng.nya.eng | 38.3 | 0.527 |
| Tatoeba-test.run-eng.run.eng | 26.6 | 0.431 |
| Tatoeba-test.sna-eng.sna.eng | 27.5 | 0.440 |
| Tatoeba-test.swa-eng.swa.eng | 4.6 | 0.195 |
| Tatoeba-test.toi-eng.toi.eng | 16.2 | 0.342 |
| Tatoeba-test.tso-eng.tso.eng | 100.0 | 1.000 |
| Tatoeba-test.umb-eng.umb.eng | 8.4 | 0.231 |
| Tatoeba-test.xho-eng.xho.eng | 37.2 | 0.554 |
| Tatoeba-test.zul-eng.zul.eng | 40.9 | 0.576 |
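
BLEU and chr-F scores of this kind can be computed with sacrebleu once you have system outputs and reference translations as parallel lists of strings; how you extract those lists from the linked test-set files is up to you. A minimal sketch with placeholder data:

```python
import sacrebleu

# Placeholder data: parallel lists of system outputs and reference translations.
hypotheses = ["The boy is reading a book."]
references = ["The boy reads a book."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])

# Note: depending on the sacrebleu version, chr-F may be reported on a
# 0-100 scale, while the table above uses a 0-1 scale.
print(f"BLEU = {bleu.score:.1f}, chr-F = {chrf.score:.3f}")
```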

System Info: