---
language:
- en
license: cc-by-sa-4.0
size_categories:
- 10M<n<100M
task_categories:
- text2text-generation
pretty_name: WikiSplit++
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: complex
    dtype: string
  - name: simple_reversed
    dtype: string
  - name: simple_tokenized
    sequence: string
  - name: simple_original
    dtype: string
  - name: entailment_prob
    dtype: float64
  - name: split
    dtype: string
  splits:
  - name: train
    num_bytes: 380811358.0
    num_examples: 504375
  - name: validation
    num_bytes: 47599265.0
    num_examples: 63065
  - name: test
    num_bytes: 47559833.0
    num_examples: 62993
  download_size: 337857760
  dataset_size: 475970456.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---

# WikiSplit++

This dataset is the HuggingFace version of WikiSplit++.  
WikiSplit++ enhances the original WikiSplit by applying two techniques: filtering through NLI classification and sentence-order reversal, which remove noisy instances and reduce hallucinations.  
The preprocessed WikiSplit dataset that formed the basis for this can be found [here](https://huggingface.co/datasets/cl-nagoya/wikisplit).
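
The refinement was done offline when the dataset was built, but the core idea is easy to illustrate. Below is a minimal sketch of the two steps, assuming the `transformers` library and the NLI model named under "Data Fields"; the exact pipeline used to construct WikiSplit++ is in the repository linked below.

```python
# Illustrative sketch only: the released dataset was built with the pipeline
# in the linked repository, not with this code.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "microsoft/deberta-v2-xxlarge-mnli"  # NLI model named under "Data Fields"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Index of the entailment label, looked up from the model config.
ENTAIL_ID = next(i for i, lbl in model.config.id2label.items() if "entail" in lbl.lower())


def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that `premise` entails `hypothesis` under the NLI model."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    return probs[ENTAIL_ID].item()


def refine(complex_sent: str, simple_sents: list[str]) -> list[str] | None:
    """Apply the two WikiSplit++ techniques to one WikiSplit instance:
    1. NLI filtering: drop the instance unless the complex sentence entails
       every simple sentence.
    2. Sentence-order reversal: reverse the order of the simple sentences.
    """
    probs = [entailment_prob(complex_sent, s) for s in simple_sents]
    # A 0.5 threshold stands in for "classified as entailment" here.
    if any(p < 0.5 for p in probs):
        return None  # filtered out
    return list(reversed(simple_sents))
```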


## Dataset Description

- **Repository:** https://github.com/nttcslab-nlp/wikisplit-pp
- **Paper:** https://arxiv.org/abs/2404.09002
- **Point of Contact:** [Hayato Tsukagoshi](mailto:tsukagoshi.hayato.r2@s.mail.nagoya-u.ac.jp)


## Usage

```python
import datasets as ds

dataset: ds.DatasetDict = ds.load_dataset("cl-nagoya/wikisplit-pp")

print(dataset)

# DatasetDict({
#     train: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob', 'split'],
#         num_rows: 504375
#     })
#     validation: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob', 'split'],
#         num_rows: 63065
#     })
#     test: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob', 'split'],
#         num_rows: 62993
#     })
# })
```
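
For example, to inspect a single training instance and its fields:

```python
example = dataset["train"][0]

print(example["complex"])          # the original complex sentence
print(example["simple_reversed"])  # target: simple sentences in reversed order
print(example["entailment_prob"])  # average entailment probability (see "Data Fields")
```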

### Data Fields

- id: The ID of the instance (note that it is not compatible with IDs in the existing WikiSplit)
- complex: A complex sentence
- simple_reversed: Simple sentences with their order reversed
- simple_tokenized: A list of simple sentences split by [PySBD](https://github.com/nipunsadvilkar/pySBD), in the original (non-reversed) order; it usually consists of 2 elements (a short segmentation sketch follows this list)
- simple_original: Simple sentences in their original order
- entailment_prob: The average probability that each simple sentence is classified as an entailment with the complex sentence as the premise. [DeBERTa-xxl](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli) is used for the NLI classification.
- split: Indicates which split (train, val, or tune) the instance belonged to in the original WikiSplit dataset
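
As a rough sketch of how `simple_tokenized` relates to `simple_original`, the simple side can be segmented with PySBD; the segmenter settings below are assumptions and the sentence is a made-up example, not taken from the dataset.

```python
import pysbd

# Segment a simple_original-style string into individual simple sentences,
# roughly reproducing simple_tokenized (segmenter settings are assumptions).
segmenter = pysbd.Segmenter(language="en", clean=False)
sentences = segmenter.segment("The film was released in 1999 . It was directed by a first-time director .")
print(sentences)  # expect a list of two sentences
```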

## Paper

Tsukagoshi et al., [WikiSplit++: Easy Data Refinement for Split and Rephrase](https://arxiv.org/abs/2404.09002), LREC-COLING 2024.

## Abstract

The task of Split and Rephrase, which splits a complex sentence into multiple simple sentences with the same meaning, improves readability and enhances the performance of downstream tasks in natural language processing (NLP).  
However, while Split and Rephrase can be improved using a text-to-text generation approach that applies encoder-decoder models fine-tuned with a large-scale dataset, it still suffers from hallucinations and under-splitting.  
To address these issues, this paper presents a simple and strong data refinement approach.   
Here, we create WikiSplit++ by removing instances in WikiSplit where complex sentences do not entail at least one of the simpler sentences and reversing the order of reference simple sentences.  
Experimental results show that training with WikiSplit++ leads to better performance than training with WikiSplit, even with fewer training instances.  
In particular, our approach yields significant gains in the number of splits and the entailment ratio, a proxy for measuring hallucinations.  

## License

[WikiSplit](https://github.com/google-research-datasets/wiki-split) is distributed under the CC-BY-SA 4.0 license.  
This dataset follows suit and is distributed under the CC-BY-SA 4.0 license.