---
license: cc-by-4.0
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: complex
    dtype: string
  - name: simple_reversed
    dtype: string
  - name: simple_tokenized
    sequence: string
  - name: simple_original
    dtype: string
  - name: entailment_prob
    dtype: float64
  - name: is_entailment
    dtype: bool
  - name: split
    dtype: string
  splits:
  - name: train
    num_bytes: 380874414
    num_examples: 504375
  - name: validation
    num_bytes: 47607153
    num_examples: 63065
  - name: test
    num_bytes: 47567721
    num_examples: 62993
  download_size: 338503258
  dataset_size: 476049288
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
task_categories:
- text2text-generation
language:
- en
pretty_name: WikiSplit++
size_categories:
- 100K<n<1M
---
Preprocessed version of WikiSplit.
Since the original WikiSplit dataset was tokenized and contained some noise, we detokenized it with the Moses detokenizer and removed text fragments.
For detailed information on the preprocessing steps, please see here.
This preprocessed dataset serves as the basis for WikiSplit++.
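The feature schema above suggests a simple filtering workflow: each row pairs a `complex` sentence with its split-and-rephrased simplification, plus NLI-based scores (`entailment_prob`, `is_entailment`) that can be used to keep only splits faithful to the source sentence. As a minimal sketch (the rows below are hypothetical examples, not taken from the dataset itself), such a filter might look like:

```python
# Sketch of entailment-based filtering over rows shaped like this
# dataset's schema. The example rows are hypothetical illustrations.
from typing import Any, Dict, List


def keep_entailed(rows: List[Dict[str, Any]], min_prob: float = 0.5) -> List[Dict[str, Any]]:
    """Keep rows whose simplification is entailed by the complex sentence."""
    return [
        r for r in rows
        if r["is_entailment"] and r["entailment_prob"] >= min_prob
    ]


rows = [
    {"id": 0, "complex": "A happened, and B followed.",
     "simple_original": "A happened. B followed.",
     "entailment_prob": 0.97, "is_entailment": True, "split": "train"},
    {"id": 1, "complex": "C happened because D.",
     "simple_original": "C happened. E happened.",
     "entailment_prob": 0.12, "is_entailment": False, "split": "train"},
]

kept = keep_entailed(rows)
print([r["id"] for r in kept])  # → [0]
```

With the real data one would typically load a split via the 🤗 `datasets` library (`load_dataset(..., split="train")`) and apply the same predicate with `Dataset.filter`.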