Task: translation

Monthly model downloads

Helsinki-NLP/opus-mt-zlw-en
14 downloads in the last 30 days

pytorch · tf

Contributed by

Language Technology Research Group at the University of Helsinki
1 team member · 1325 models

How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zlw-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zlw-en")
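Once the tokenizer and model are loaded, translation is done by tokenizing the source text and calling generate(). A minimal sketch (the Czech input sentence and the max_length value are illustrative assumptions, not part of the model card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zlw-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zlw-en")

# Tokenize a West Slavic (here: Czech) sentence and generate its English translation.
inputs = tokenizer("Dobrý den, jak se máte?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translation)
```

The same call pattern works for any of the source languages covered by the zlw (West Slavic) group, e.g. Polish or Upper Sorbian input.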

zlw-eng

Benchmarks

testset BLEU chr-F
newssyscomb2009-ceseng.ces.eng 25.7 0.536
newstest2009-ceseng.ces.eng 24.6 0.530
newstest2010-ceseng.ces.eng 25.0 0.540
newstest2011-ceseng.ces.eng 25.9 0.539
newstest2012-ceseng.ces.eng 24.8 0.533
newstest2013-ceseng.ces.eng 27.8 0.551
newstest2014-csen-ceseng.ces.eng 30.3 0.585
newstest2015-encs-ceseng.ces.eng 27.5 0.542
newstest2016-encs-ceseng.ces.eng 29.1 0.564
newstest2017-encs-ceseng.ces.eng 26.0 0.537
newstest2018-encs-ceseng.ces.eng 27.3 0.544
Tatoeba-test.ces-eng.ces.eng 53.3 0.691
Tatoeba-test.csb-eng.csb.eng 10.2 0.313
Tatoeba-test.dsb-eng.dsb.eng 11.7 0.296
Tatoeba-test.hsb-eng.hsb.eng 24.6 0.426
Tatoeba-test.multi.eng 51.8 0.680
Tatoeba-test.pol-eng.pol.eng 50.4 0.667
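The BLEU column above is a corpus-level score produced by standard evaluation tooling. As a rough illustration of the metric itself, here is a simplified sentence-level BLEU (uniform weights over 1- to 4-grams, add-one smoothing, brevity penalty). This is a sketch of the idea, not the scoring script used for these benchmarks:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count all n-grams of order n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, hypothesis, max_n=4):
    """Simplified sentence-level BLEU with add-one smoothing and brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = ngrams(hyp, n)
        ref_ngrams = ngrams(ref, n)
        # Clipped overlap: each hypothesis n-gram counts at most as often as in the reference.
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        total = max(sum(hyp_ngrams.values()), 1)
        # Add-one smoothing so one empty n-gram order does not zero the whole score.
        precisions.append((overlap + 1) / (total + 1))
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty punishes hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * geo_mean

print(sentence_bleu("the cat sat on the mat", "the cat sat on the mat"))  # → 1.0
print(sentence_bleu("the cat sat on the mat", "the dog sat on the mat"))  # between 0 and 1
```

The published scores additionally normalize tokenization and aggregate n-gram counts over the whole test set, which is why corpus BLEU is not a simple average of sentence scores.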

System Info: