Helsinki-NLP/opus-mt-en-afa

13 downloads in the last 30 days

Tags: pytorch · tf

Contributed by the Language Technology Research Group at the University of Helsinki

How to use this model directly from the 🤗/transformers library:

			
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-afa")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-afa")
```
eng-afa

  • source group: English

  • target group: Afro-Asiatic languages

  • OPUS readme: eng-afa

  • model: transformer

  • source language(s): eng

  • target language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir

  • pre-processing: normalization + SentencePiece (spm32k,spm32k)

  • a sentence-initial language token of the form >>id<< is required, where id is a valid target language ID (e.g. >>heb<<)

  • download original weights: opus2m-2020-08-01.zip

  • test set translations: opus2m-2020-08-01.test.txt

  • test set scores: opus2m-2020-08-01.eval.txt
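The language-token requirement above can be sketched end to end. The `translate` helper and the `>>heb<<` example below are illustrative, not part of the original card; calling `translate` downloads the model weights on first use:

```python
def prepend_lang_token(text: str, lang_id: str) -> str:
    """Add the required sentence-initial target-language token, e.g. >>heb<< for Hebrew."""
    return f">>{lang_id}<< {text}"

def translate(text: str, lang_id: str) -> str:
    """End-to-end sketch; fetches Helsinki-NLP/opus-mt-en-afa from the Hub when called."""
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM  # imported lazily

    tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-afa")
    model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-afa")
    batch = tokenizer(prepend_lang_token(text, lang_id), return_tensors="pt")
    output = model.generate(**batch)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(prepend_lang_token("Good morning.", "heb"))  # >>heb<< Good morning.
```

The `lang_id` must be one of the target language IDs listed above (acm, afb, amh, …, tir); without the token the model has no way to know which Afro-Asiatic language to produce.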

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.eng-amh.eng.amh | 11.6 | 0.504 |
| Tatoeba-test.eng-ara.eng.ara | 12.0 | 0.404 |
| Tatoeba-test.eng-hau.eng.hau | 10.2 | 0.429 |
| Tatoeba-test.eng-heb.eng.heb | 32.3 | 0.551 |
| Tatoeba-test.eng-kab.eng.kab | 1.6 | 0.191 |
| Tatoeba-test.eng-mlt.eng.mlt | 17.7 | 0.551 |
| Tatoeba-test.eng.multi | 14.4 | 0.375 |
| Tatoeba-test.eng-rif.eng.rif | 1.7 | 0.103 |
| Tatoeba-test.eng-shy.eng.shy | 0.8 | 0.090 |
| Tatoeba-test.eng-som.eng.som | 16.0 | 0.429 |
| Tatoeba-test.eng-tir.eng.tir | 2.7 | 0.238 |
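chr-F in the table above scores character n-gram overlap between hypothesis and reference, which is more forgiving than BLEU for morphologically rich targets. A minimal illustrative sketch (a simplification, not the exact sacreBLEU chrF implementation used for these scores):

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    # Character n-grams with spaces removed, as chrF operates on characters.
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    # Average n-gram precision and recall over orders 1..max_n, then
    # combine with an F-beta score (beta=2 weights recall higher, as chrF does).
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if sum(hyp.values()) == 0 or sum(ref.values()) == 0:
            continue  # text too short for this n-gram order
        overlap = sum((hyp & ref).values())
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

An identical hypothesis and reference score 1.0; disjoint strings score 0.0, mirroring how the 0–1 chr-F column above should be read.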

System Info: