---
license: cc-by-4.0
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: complex
    dtype: string
  - name: simple_reversed
    dtype: string
  - name: simple_tokenized
    sequence: string
  - name: simple_original
    dtype: string
  - name: entailment_prob
    dtype: float64
  - name: is_entailment
    dtype: bool
  - name: split
    dtype: string
  splits:
  - name: train
    num_bytes: 380874414
    num_examples: 504375
  - name: validation
    num_bytes: 47607153
    num_examples: 63065
  - name: test
    num_bytes: 47567721
    num_examples: 62993
  download_size: 338503258
  dataset_size: 476049288
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
task_categories:
- text2text-generation
language:
- en
pretty_name: WikiSplit++
size_categories:
- 100K<n<1M
---

A preprocessed version of [WikiSplit](https://arxiv.org/abs/1808.09468), a split-and-rephrase dataset mined from Wikipedia edit histories.

Because the [original WikiSplit dataset](https://huggingface.co/datasets/wiki_split) is tokenized and contains some noise, we detokenized it with the [Moses detokenizer](https://github.com/moses-smt/mosesdecoder/blob/c41ff18111f58907f9259165e95e657605f4c457/scripts/tokenizer/detokenizer.perl) and removed noisy text fragments.
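
As an illustration, the detokenization step can be approximated in Python with the `sacremoses` port of the Moses scripts; this is a minimal sketch under that assumption, while the actual pipeline invokes the Perl script linked above:

```python
# Minimal sketch of the detokenization step, assuming the Python
# `sacremoses` port in place of the original Perl detokenizer script.
from sacremoses import MosesDetokenizer

detokenizer = MosesDetokenizer(lang="en")

# Tokenized text in the style of the original WikiSplit release.
tokens = "The film premiered in 2001 , and it was a hit .".split()

# detokenize() joins the tokens and reattaches punctuation.
print(detokenizer.detokenize(tokens))
# -> The film premiered in 2001, and it was a hit.
```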

For details of the preprocessing steps, see [the preprocessing code](https://github.com/nttcslab-nlp/wikisplit-pp/src/datasets/common.py).

This preprocessed dataset serves as the basis for [WikiSplit++](https://huggingface.co/datasets/cl-nagoya/wikisplit-pp).
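
The dataset can be loaded with the Hugging Face `datasets` library. In the sketch below, the repository id `cl-nagoya/wikisplit` is an assumption inferred from the WikiSplit++ link above; substitute the actual id of this repository if it differs:

```python
# Minimal loading sketch. The repository id "cl-nagoya/wikisplit" is an
# assumption inferred from the sibling WikiSplit++ repository.
from datasets import load_dataset

dataset = load_dataset("cl-nagoya/wikisplit")

# Three splits, matching the metadata above.
print(dataset)

example = dataset["train"][0]
print(example["complex"])          # source: the single complex sentence
print(example["simple_original"])  # target: the detokenized simple sentences
print(example["entailment_prob"])  # entailment probability for the pair
```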