---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: complex
    dtype: string
  - name: simple
    dtype: string
  - name: split
    dtype: string
  splits:
  - name: train
    num_bytes: 309170607
    num_examples: 795585
  - name: validation
    num_bytes: 38667164
    num_examples: 99448
  - name: test
    num_bytes: 38650132
    num_examples: 99448
  - name: all
    num_bytes: 386487903
    num_examples: 994481
  download_size: 540598777
  dataset_size: 772975806
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
  - split: all
    path: data/all-*
license: cc-by-sa-4.0
task_categories:
- text2text-generation
language:
- en
pretty_name: WikiSplit
size_categories:
- 100K<n<1M
---
Preprocessed version of WikiSplit.
Since the original WikiSplit dataset was tokenized and contained some noise, we detokenized it with the Moses detokenizer and removed text fragments.
For details on the preprocessing steps, please see the repository linked below.
This preprocessed dataset serves as the basis for WikiSplit++.
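As a rough illustration of the detokenization step, here is a minimal sketch that assumes the `sacremoses` package; the exact preprocessing scripts used for this dataset are in the repository linked below.

```python
# Minimal sketch of Moses detokenization (assumes the sacremoses package);
# the actual preprocessing for this dataset also filters out text fragments.
from sacremoses import MosesDetokenizer

detok = MosesDetokenizer(lang="en")

tokenized = "It 's a tokenized sentence , isn 't it ?"
# detokenize() takes a list of tokens and returns a plain string,
# e.g. "It's a tokenized sentence, isn't it?"
print(detok.detokenize(tokenized.split()))
```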
## Dataset Description
- Repository: https://github.com/nttcslab-nlp/wikisplit-pp
- Paper: https://arxiv.org/abs/2404.09002
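The splits and fields declared in the metadata above can be loaded with the Hugging Face `datasets` library; the repository id below is a placeholder and should be replaced with this dataset's actual Hub id.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual Hub id of this dataset.
# "default" is the only config declared in the metadata above.
dataset = load_dataset("<org>/wikisplit", name="default")

print(dataset)              # splits: train / validation / test / all
print(dataset["train"][0])  # fields: id, complex, simple, split
```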