Dataset Card for "wmt18"

Dataset Summary

Translation dataset based on data from statmt.org.

Versions exist for different years, each combining several data sources. The base wmt script allows you to create a custom dataset by choosing your own data sources and language pair. This can be done as follows:

import datasets
from datasets import inspect_dataset, load_dataset_builder

# Download the wmt18 loading scripts to a local directory for customization
inspect_dataset("wmt18", "path/to/scripts")

# Build a custom configuration from a chosen language pair and data subsets
builder = load_dataset_builder(
    "path/to/scripts/wmt_utils.py",
    language_pair=("fr", "de"),
    subsets={
        datasets.Split.TRAIN: ["commoncrawl_frde"],
        datasets.Split.VALIDATION: ["euelections_dev2019"],
    },
)

# Standard version
builder.download_and_prepare()
ds = builder.as_dataset()

# Streamable version
ds = builder.as_streaming_dataset()
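
If one of the predefined configurations (such as cs-en below) is sufficient, the dataset can also be loaded directly with load_dataset; a minimal sketch:

from datasets import load_dataset

# Download, prepare, and load the predefined cs-en configuration
ds = load_dataset("wmt18", "cs-en")
print(ds)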

Supported Tasks and Leaderboards

More Information Needed

Languages

More Information Needed

Dataset Structure

Data Instances

cs-en

  • Size of downloaded dataset files: 1935.34 MB
  • Size of the generated dataset: 1394.65 MB
  • Total amount of disk used: 3329.99 MB

An example of 'validation' looks as follows.
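
The sentence pair below is illustrative only, not an actual row from the dataset; each row holds a single translation dictionary:

{
    "translation": {
        "cs": "Dobrý den, světe.",
        "en": "Hello, world."
    }
}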


Data Fields

The data fields are the same among all splits.

cs-en

  • translation: a dictionary of sentence strings keyed by language code, with keys cs and en.
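
Accessing the field looks like this; a short sketch, assuming ds was loaded with the cs-en configuration as shown above:

# Each row holds one sentence pair keyed by language code
example = ds["validation"][0]
czech_sentence = example["translation"]["cs"]
english_sentence = example["translation"]["en"]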

Data Splits

name    train      validation  test
cs-en   11046024   3005        2983
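
The split sizes can be checked programmatically; a short sketch, again assuming the cs-en configuration loaded earlier:

# Print the number of examples in each split
for split_name, split in ds.items():
    print(split_name, len(split))
# Expected: train 11046024, validation 3005, test 2983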

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed

Annotations

Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

More Information Needed

Citation Information

@InProceedings{bojar-EtAl:2018:WMT1,
  author    = {Bojar, Ond{\v{r}}ej  and  Federmann, Christian  and  Fishel, Mark
    and Graham, Yvette  and  Haddow, Barry  and  Huck, Matthias  and
    Koehn, Philipp  and  Monz, Christof},
  title     = {Findings of the 2018 Conference on Machine Translation (WMT18)},
  booktitle = {Proceedings of the Third Conference on Machine Translation,
    Volume 2: Shared Task Papers},
  month     = {October},
  year      = {2018},
  address   = {Belgium, Brussels},
  publisher = {Association for Computational Linguistics},
  pages     = {272--307},
  url       = {http://www.aclweb.org/anthology/W18-6401}
}

Contributions

Thanks to @thomwolf and @patrickvonplaten for adding this dataset.