---
license: cc-by-4.0
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: complex
    dtype: string
  - name: simple_reversed
    dtype: string
  - name: simple_tokenized
    sequence: string
  - name: simple_original
    dtype: string
  - name: entailment_prob
    dtype: float64
  - name: is_entailment
    dtype: bool
  - name: split
    dtype: string
  splits:
  - name: train
    num_bytes: 380874414
    num_examples: 504375
  - name: validation
    num_bytes: 47607153
    num_examples: 63065
  - name: test
    num_bytes: 47567721
    num_examples: 62993
  download_size: 338503258
  dataset_size: 476049288
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
task_categories:
- text2text-generation
language:
- en
pretty_name: WikiSplit++
size_categories:
- 100K<n<1M
---


This is a preprocessed version of [WikiSplit](https://arxiv.org/abs/1808.09468).

Since the [original WikiSplit dataset](https://huggingface.co/datasets/wiki_split) is tokenized and contains some noise, we detokenized it with the [Moses detokenizer](https://github.com/moses-smt/mosesdecoder/blob/c41ff18111f58907f9259165e95e657605f4c457/scripts/tokenizer/detokenizer.perl) and removed text fragments.
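
The preprocessing itself used the Perl detokenizer linked above; as a rough illustration of the same step, a minimal sketch with the `sacremoses` Python port of the Moses detokenizer (an assumption for illustration, not the script used here) might look like this:

```python
from sacremoses import MosesDetokenizer  # pip install sacremoses

# Join Moses-style tokens back into natural text.
detok = MosesDetokenizer(lang="en")
tokens = ["The", "film", "'s", "score", "was", "composed", "in", "1977", "."]
print(detok.detokenize(tokens))
# Expected output (approximately): The film's score was composed in 1977.
```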

For details on the preprocessing steps, please see [the preprocessing script](https://github.com/nttcslab-nlp/wikisplit-pp/src/datasets/common.py).

This preprocessed dataset serves as the basis for [WikiSplit++](https://huggingface.co/datasets/cl-nagoya/wikisplit-pp).
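
As a quick-start sketch, the dataset can be loaded with the 🤗 `datasets` library and inspected via the fields listed in the metadata above. The repo ID below is an assumption for illustration; replace it with this repository's actual ID if it differs.

```python
from datasets import load_dataset

# Repo ID assumed for illustration; use this repository's actual ID.
ds = load_dataset("cl-nagoya/wikisplit", split="train")

row = ds[0]
print(row["complex"])           # complex source sentence
print(row["simple_reversed"])   # simple sentences (reversed variant, string)
print(row["simple_tokenized"])  # simple sentences as a list of tokens
print(row["simple_original"])   # simple sentences as in the original data
print(row["entailment_prob"], row["is_entailment"])  # NLI-based filtering signals
```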