zho-eng

Table of Contents

Model Details

  • Model Description:
  • Developed by: Language Technology Research Group at the University of Helsinki
  • Model Type: Translation
  • Language(s):
    • Source Language: Chinese
    • Target Language: English
  • License: CC-BY-4.0
  • Resources for more information:

Uses

Direct Use

This model can be used for translation and text-to-text generation.
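For quick experiments, the model can be loaded through the `transformers` translation pipeline. This is a minimal sketch; the example sentence is illustrative and the first call downloads the model weights.

```python
from transformers import pipeline

# Load the zh->en model as a translation pipeline (downloads weights on first use).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")

# Translate an illustrative Chinese sentence; the pipeline returns a list of dicts.
result = translator("你好,世界")
print(result[0]["translation_text"])
```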

Risks, Limitations and Biases

CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).

Further details about the dataset for this model can be found in the OPUS readme: zho-eng

Training

System Information

  • helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
  • transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
  • port_machine: brutasse
  • port_time: 2020-08-21-14:41
  • src_multilingual: False
  • tgt_multilingual: False

Training Data

Preprocessing

Evaluation

Results

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| Tatoeba-test.zho.eng | 36.1 | 0.548 |

Citation Information

@InProceedings{TiedemannThottingal:EAMT2020,
  author = {J{\"o}rg Tiedemann and Santhosh Thottingal},
  title = {{OPUS-MT} -- {B}uilding open translation services for the {W}orld},
  booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)},
  year = {2020},
  address = {Lisbon, Portugal}
}

How to Get Started With the Model

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and the Marian seq2seq model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
```
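Once the tokenizer and model are loaded as above, a translation can be produced with `generate` and `decode`. This is a minimal, self-contained sketch; the input sentence is illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en")

# Tokenize an illustrative Chinese sentence into PyTorch tensors.
inputs = tokenizer("我喜欢机器翻译。", return_tensors="pt")

# Generate the English translation and decode it back to text.
outputs = model.generate(**inputs)
translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(translation)
```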