---
language:
  - en
  - de
thumbnail: null
tags:
  - translation
  - wmt16
  - allenai
license: apache-2.0
datasets:
  - wmt16
metrics:
  - bleu
---

# FSMT

## Model description

This is a ported version of the fairseq-based WMT16 transformer for en-de.

For more details, please see [Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation](https://arxiv.org/abs/2006.10369).

All 3 models are available:

- [wmt16-en-de-dist-12-1](https://huggingface.co/allenai/wmt16-en-de-dist-12-1)
- [wmt16-en-de-dist-6-1](https://huggingface.co/allenai/wmt16-en-de-dist-6-1)
- [wmt16-en-de-12-1](https://huggingface.co/allenai/wmt16-en-de-12-1)
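The suffix encodes the layer allocation: `dist-12-1`, for example, denotes a distilled model with a 12-layer encoder and a 1-layer decoder, following the paper's deep-encoder/shallow-decoder setup. A quick way to check this is a sketch like the following, assuming the checkpoint's config exposes the standard `encoder_layers`/`decoder_layers` fields as FSMT's config does in `transformers`:

```python
# Sketch: inspect the layer allocation implied by the model name.
# Assumes the config exposes encoder_layers/decoder_layers.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("allenai/wmt16-en-de-dist-12-1")
print(config.encoder_layers, config.decoder_layers)  # expected: 12 1
```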

## Intended uses & limitations

### How to use

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "allenai/wmt16-en-de-dist-12-1"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

# Renamed from `input` to avoid shadowing the Python built-in
input_text = "Machine learning is great, isn't it?"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded)  # Maschinelles Lernen ist großartig, nicht wahr?
```
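For translating several sentences at once, batching with padding should work as it does for other seq2seq models in `transformers`; a minimal sketch (the second example sentence is illustrative, not from the original card):

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "allenai/wmt16-en-de-dist-12-1"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

sentences = [
    "Machine learning is great, isn't it?",
    "The weather is nice today.",  # illustrative example
]
# Pad to the longest sentence so the batch stacks into one tensor
batch = tokenizer(sentences, return_tensors="pt", padding=True)
outputs = model.generate(**batch, num_beams=5)
for output in outputs:
    print(tokenizer.decode(output, skip_special_tokens=True))
```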

### Limitations and bias

## Training data

Pretrained weights were left identical to the original model released by allenai. For more details, please see the paper.

## Eval results

Here are the BLEU scores:

| model                 | fairseq (BLEU) | transformers (BLEU) |
|-----------------------|---------------:|--------------------:|
| wmt16-en-de-dist-12-1 | 28.3           | 27.52               |

The score is slightly below the one reported in the paper, because the paper's authors don't use `sacrebleu` and measure BLEU on tokenized outputs, whereas the `transformers` score was measured with `sacrebleu` on detokenized outputs.
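To make the distinction concrete, here is a sketch of how a detokenized hypothesis is scored with `sacrebleu`'s Python API (the example strings are placeholders, not actual model outputs or WMT references):

```python
import sacrebleu

# One hypothesis and one reference stream, both detokenized plain text;
# sacrebleu applies its own standard tokenization internally.
hypotheses = ["Maschinelles Lernen ist großartig, nicht wahr?"]
references = [["Maschinelles Lernen ist großartig, oder?"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)
```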

The score was calculated using this code:

```bash
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=en-de
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt16 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt16 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt16-en-de-dist-12-1 \
  $DATA_DIR/val.source $SAVE_DIR/test_translations.txt \
  --reference_path $DATA_DIR/val.target \
  --score_path $SAVE_DIR/test_bleu.json \
  --bs $BS --task translation --num_beams $NUM_BEAMS
```
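Once the run finishes, the BLEU score lands in `test_bleu.json`; a small sketch for reading it back (the `"bleu"` key is an assumption about `run_eval.py`'s output format, so check the file if it differs):

```python
import json

# Path follows the $SAVE_DIR layout used in the commands above
with open("data/en-de/test_bleu.json") as f:
    results = json.load(f)
print(results["bleu"])  # assumed key name
```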

## Data Sources

## BibTeX entry and citation info

```bibtex
@misc{kasai2020deep,
    title={Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation},
    author={Jungo Kasai and Nikolaos Pappas and Hao Peng and James Cross and Noah A. Smith},
    year={2020},
    eprint={2006.10369},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```