
Helsinki-NLP/opus-mt-sem-sem



Contributed by

Language Technology Research Group at the University of Helsinki

How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-sem-sem")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-sem-sem")

  • source group: Semitic languages

  • target group: Semitic languages

  • OPUS readme: sem-sem

  • model: transformer

  • source language(s): apc ara arq arz heb mlt

  • target language(s): apc ara arq arz heb mlt

  • pre-processing: normalization + SentencePiece (spm32k,spm32k)

  • a sentence-initial language token is required in the form >>id<< (id = a valid target language ID)

  • download original weights:

  • test set translations: opus-2020-07-27.test.txt

  • test set scores: opus-2020-07-27.eval.txt
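Because this single model covers several target languages, every source sentence must begin with the >>id<< target-language token described above. A minimal sketch of preparing such inputs (the helper name and the language set hard-coded from the list above are ours, not part of the transformers library):

```python
# Sketch: prepend the >>id<< target-language token this multilingual
# model expects. Helper name is illustrative, not a library API.

# Target languages listed on this model card.
TARGET_LANGS = {"apc", "ara", "arq", "arz", "heb", "mlt"}

def with_target_token(text: str, target_lang: str) -> str:
    """Prefix a source sentence with the >>id<< token the model expects."""
    if target_lang not in TARGET_LANGS:
        raise ValueError(f"unsupported target language: {target_lang}")
    return f">>{target_lang}<< {text}"

# Example: request a Hebrew translation of an Arabic sentence.
src = with_target_token("مرحبا بالعالم", "heb")
# src == ">>heb<< مرحبا بالعالم"
```

The resulting string is what you would pass to the tokenizer before calling model.generate(...).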


testset                          BLEU    chr-F
Tatoeba-test.ara-ara.ara.ara     4.2     0.200
Tatoeba-test.ara-heb.ara.heb     34.0    0.542
Tatoeba-test.ara-mlt.ara.mlt     16.6    0.513
Tatoeba-test.heb-ara.heb.ara     18.8    0.477
Tatoeba-test.mlt-ara.mlt.ara     20.7    0.388
Tatoeba-test.multi.multi         27.1    0.507

System Info: