Dataset: wmt14

How to load this dataset directly with the 🤗 datasets library:

from datasets import load_dataset

dataset = load_dataset("wmt14")



Dataset Card for "wmt14"

Table of Contents

Dataset Description

Dataset Summary

Translation dataset based on the data from statmt.org.

Versions exist for the different years, each using a combination of multiple data sources. The base wmt_translate builder lets you choose your own data/language pair by creating a custom datasets.translate.wmt.WmtConfig.

config = datasets.wmt.WmtConfig(
    language_pair=("fr", "de"),
    subsets={
        datasets.Split.TRAIN: ["commoncrawl_frde"],
        datasets.Split.VALIDATION: ["euelections_dev2019"],
    },
)
builder = datasets.builder("wmt_translate", config=config)

Supported Tasks

More Information Needed


Languages

More Information Needed

Dataset Structure

We show detailed information for up to 5 configurations of the dataset.

Data Instances

cs-en


  • Size of downloaded dataset files: 1616.49 MB
  • Size of the generated dataset: 269.84 MB
  • Total amount of disk used: 1886.33 MB

An example of 'train' looks as follows.
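A minimal sketch of that record shape, based on the `translation` field described below (the sentence values here are made-up placeholders, not actual corpus text):

```python
# One wmt14 cs-en record: a single "translation" dict keyed by
# language code (placeholder sentences, not real corpus text).
example = {
    "translation": {
        "cs": "Ahoj svete",
        "en": "Hello world",
    }
}

# Every record carries exactly one sentence per language in the pair.
print(sorted(example["translation"]))  # ['cs', 'en']
```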

Data Fields

The data fields are the same among all splits.


  • translation: a multilingual string variable, with possible languages including cs, en.

Data Splits Sample Size

name    train    validation  test
cs-en   953621   3000        3003
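The counts above sum to the total number of sentence pairs in the cs-en configuration; a quick sketch using the table's numbers directly:

```python
# Split sizes for the cs-en configuration, copied from the table above.
splits = {"train": 953_621, "validation": 3_000, "test": 3_003}

# Total number of sentence pairs across all splits.
total = sum(splits.values())
print(total)  # 959624
```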

Dataset Creation

Curation Rationale

More Information Needed

Source Data

More Information Needed


Annotations

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

More Information Needed

Citation Information

@InProceedings{bojar-EtAl:2014:W14-33,
  author    = {Bojar, Ondrej  and  Buck, Christian  and  Federmann, Christian  and  Haddow, Barry  and  Koehn, Philipp  and  Leveling, Johannes  and  Monz, Christof  and  Pecina, Pavel  and  Post, Matt  and  Saint-Amand, Herve  and  Soricut, Radu  and  Specia, Lucia  and  Tamchyna, Ale{\v{s}}},
  title     = {Findings of the 2014 Workshop on Statistical Machine Translation},
  booktitle = {Proceedings of the Ninth Workshop on Statistical Machine Translation},
  month     = {June},
  year      = {2014},
  address   = {Baltimore, Maryland, USA},
  publisher = {Association for Computational Linguistics},
  pages     = {12--58},
  url       = {}
}