Task Categories: translation
Multilinguality: translation
Size Categories: 10K<n<100K
Language Creators: expert-generated, found
Annotations Creators: no-annotation
Source Datasets: original
Licenses: unknown

Dataset Card for poleval2019_mt

Dataset Summary

PolEval is a SemEval-inspired evaluation campaign for natural language processing tools for Polish. Submitted solutions compete against one another within tasks selected by the organizers, using the available data, and are evaluated according to pre-established procedures. One of the tasks in PolEval-2019 was Machine Translation (Task 4).

The task is to train as good a machine translation system as possible, using any technology, with limited textual resources. The competition covers two language pairs: the more popular English-Polish (into Polish only) and the low-resourced Russian-Polish (in both directions).

Here, Polish-English data is also made available to allow training in both directions. However, test data is available ONLY for English-Polish.

Supported Tasks and Leaderboards

Supports machine translation between Russian and Polish and between English and Polish (in both directions).

Languages

  • Polish (pl)
  • Russian (ru)
  • English (en)

Dataset Structure

Data Instances

The training data consists of bilingual corpora aligned at the sentence level. The corpora are saved as UTF-8 plain text, one language per file.

Data Fields

An example translation pair looks as follows:

{
  'translation': {'ru': 'не содержала в себе моделей. Модели это сравнительно новое явление. ', 
                  'pl': 'nie miała w sobie modeli. Modele to względnie nowa dziedzina. Tak więc, jeśli '}
}
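Given the one-language-per-file layout described above, two aligned files can be zipped line by line into translation dicts of this shape. A minimal sketch, assuming sentence-for-sentence alignment (the file names and the `read_parallel` helper are illustrative, not part of the official loading script):

```python
from pathlib import Path


def read_parallel(src_path, tgt_path, src_lang, tgt_lang):
    """Zip two sentence-aligned plain-text files into translation examples."""
    src_lines = Path(src_path).read_text(encoding="utf-8").splitlines()
    tgt_lines = Path(tgt_path).read_text(encoding="utf-8").splitlines()
    assert len(src_lines) == len(tgt_lines), "corpora must be aligned line-for-line"
    return [
        {"translation": {src_lang: s, tgt_lang: t}}
        for s, t in zip(src_lines, tgt_lines)
    ]


# Toy files standing in for e.g. train.ru / train.pl
Path("toy.ru").write_text("не содержала в себе моделей.\n", encoding="utf-8")
Path("toy.pl").write_text("nie miała w sobie modeli.\n", encoding="utf-8")

examples = read_parallel("toy.ru", "toy.pl", "ru", "pl")
print(examples[0]["translation"]["pl"])  # -> nie miała w sobie modeli.
```

The length assertion matters: a single dropped line in either file silently misaligns every pair after it.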

Data Splits

The dataset is divided into train, validation and test splits for each language pair:

        train    validation  test
ru-pl   20001    3001        2969
pl-ru   20001    3001        2969
en-pl   129255   1000        9845

Dataset Creation

Curation Rationale

This data was curated as a task for PolEval-2019. The task is to train as good a machine translation system as possible, using any technology, with limited textual resources. The competition covers two language pairs: the more popular English-Polish (into Polish only) and the low-resourced Russian-Polish (in both directions).

PolEval is a SemEval-inspired evaluation campaign for natural language processing tools for Polish. Submitted tools compete against one another within tasks selected by the organizers, using the available data, and are evaluated according to pre-established procedures.

PolEval 2019-related papers were presented at the AI & NLP Workshop Day (Warsaw, May 31, 2019). Links to the top-performing models for the various tasks (including Task 4: Machine Translation) are available at this link.

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

The organization details of PolEval are available at this link.

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

@proceedings{ogr:kob:19:poleval,
  editor    = {Maciej Ogrodniczuk and Łukasz Kobyliński},
  title     = {{Proceedings of the PolEval 2019 Workshop}},
  year      = {2019},
  address   = {Warsaw, Poland},
  publisher = {Institute of Computer Science, Polish Academy of Sciences},
  url       = {http://2019.poleval.pl/files/poleval2019.pdf},
  isbn      = {978-83-63159-28-3}
}

Contributions

Thanks to @vrindaprabhu for adding this dataset.
