
Helsinki-NLP/opus-mt-afa-en

Monthly model downloads: 19 (last 30 days)

Frameworks: pytorch, tf

Contributed by the Language Technology Research Group at the University of Helsinki (1 team member · 1325 models)

How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-afa-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-afa-en")
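Loading the model is only half the story; to actually translate, tokenize a source sentence and call `generate`. A minimal sketch building on the snippet above (the Hebrew input sentence is an illustrative example chosen from the model's source languages, and the run assumes the weights can be downloaded):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-afa-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-afa-en")

# Tokenize a source-language sentence (Hebrew here) into a padded batch of tensors
batch = tokenizer(["שלום עולם"], return_tensors="pt", padding=True)

# Generate target-side token ids and decode them back to an English string
generated = model.generate(**batch)
translation = tokenizer.batch_decode(generated, skip_special_tokens=True)
print(translation)
```

Since the target side is English only, no target-language prefix token is needed in the input, unlike multi-target OPUS-MT models.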

afa-eng

  • source group: Afro-Asiatic languages

  • target group: English

  • OPUS readme: afa-eng

  • model: transformer

  • source language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir

  • target language(s): eng

  • pre-processing: normalization + SentencePiece (spm32k,spm32k)

  • download original weights: opus2m-2020-07-31.zip

  • test set translations: opus2m-2020-07-31.test.txt

  • test set scores: opus2m-2020-07-31.eval.txt

Benchmarks

testset BLEU chr-F
Tatoeba-test.amh-eng.amh.eng 35.9 0.550
Tatoeba-test.ara-eng.ara.eng 36.6 0.543
Tatoeba-test.hau-eng.hau.eng 11.9 0.327
Tatoeba-test.heb-eng.heb.eng 42.7 0.591
Tatoeba-test.kab-eng.kab.eng 4.3 0.213
Tatoeba-test.mlt-eng.mlt.eng 44.3 0.618
Tatoeba-test.multi.eng 27.1 0.464
Tatoeba-test.rif-eng.rif.eng 3.5 0.141
Tatoeba-test.shy-eng.shy.eng 0.6 0.125
Tatoeba-test.som-eng.som.eng 23.6 0.472
Tatoeba-test.tir-eng.tir.eng 13.1 0.328
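The chr-F column above is a character n-gram F-score. A simplified sentence-level sketch (character 1- to 6-grams, β = 2, whitespace removed) shows the idea; the scores in the table come from the OPUS evaluation tooling, so this is illustrative only, not the exact implementation:

```python
from collections import Counter

def char_ngrams(text, n):
    # Count character n-grams with whitespace removed, as chrF does
    text = "".join(text.split())
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    # Average n-gram precision and recall over n = 1..max_n,
    # then combine them into an F-score weighted toward recall (beta = 2)
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(hyp.values()), 1))
        recalls.append(overlap / max(sum(ref.values()), 1))
    p = sum(precisions) / max_n
    r = sum(recalls) / max_n
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)

print(chrf("the cat sat", "the cat sat"))  # identical strings score 1.0
print(chrf("abc", "xyz"))                  # disjoint strings score 0.0
```

A hypothesis close to the reference at the character level scores near 1.0 even when word tokenization differs, which is why chr-F is reported alongside BLEU for morphologically rich source languages like those in this group.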
