Commit 6451fb4 (parent: 6fbe8d0) by hpprc: Update README.md

README.md (+57 -0):
  - split: test
    path: data/test-*
---

# WikiSplit++

This dataset is the HuggingFace version of WikiSplit++.
WikiSplit++ enhances the original WikiSplit by applying two techniques: filtering through NLI classification and sentence-order reversal, which help remove noise and reduce hallucinations compared to the original WikiSplit.
The preprocessed WikiSplit dataset that formed the basis for this can be found [here](https://huggingface.co/datasets/cl-nagoya/wikisplit).

## Usage

```python
import datasets as ds

# Omitting `split` loads all splits and returns a DatasetDict.
dataset: ds.DatasetDict = ds.load_dataset("cl-nagoya/wikisplit-pp")

print(dataset)

# DatasetDict({
#     train: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob', 'split'],
#         num_rows: 504375
#     })
#     validation: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob', 'split'],
#         num_rows: 63065
#     })
#     test: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob', 'split'],
#         num_rows: 62993
#     })
# })
```
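
Indexing into the returned `DatasetDict` gives the individual splits. A minimal access sketch (the field names are those listed under Data Fields below):

```python
train = dataset["train"]

# Each row is a dict keyed by the feature names.
example = train[0]
print(example["complex"])          # the source complex sentence
print(example["simple_reversed"])  # reference simple sentences, order-reversed
```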

### Data Fields

- id: The ID of the data (note that it is not compatible with the IDs in the existing WikiSplit)
- complex: A complex sentence
- simple_reversed: Simple sentences with their order reversed
- simple_tokenized: A list of simple sentences split by [PySBD](https://github.com/nipunsadvilkar/pySBD), not reversed in order (often consists of 2 elements)
- simple_original: Simple sentences in their original order
- entailment_prob: The average probability that each simple sentence is classified as an entailment given the complex sentence as the premise; [DeBERTa-xxl](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli) is used for the NLI classification (a re-filtering sketch follows this list)
- split: Indicates which split (train, val, or tune) this data belonged to in the original WikiSplit dataset
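
Because `entailment_prob` is stored per instance, the data can be re-filtered at a stricter threshold than the one used for the release. A minimal sketch using `datasets.Dataset.filter`; the 0.9 threshold is an arbitrary illustration, not a value from the paper:

```python
# Keep only training instances whose average entailment probability
# clears a user-chosen threshold (0.9 here is arbitrary).
high_confidence = dataset["train"].filter(
    lambda example: example["entailment_prob"] >= 0.9
)
print(high_confidence.num_rows)
```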

## Paper

Tsukagoshi et al., [WikiSplit++: Easy Data Refinement for Split and Rephrase](https://arxiv.org/abs/2404.09002), LREC-COLING 2024.

## Abstract

The task of Split and Rephrase, which splits a complex sentence into multiple simple sentences with the same meaning, improves readability and enhances the performance of downstream tasks in natural language processing (NLP).
However, while Split and Rephrase can be improved using a text-to-text generation approach that applies encoder-decoder models fine-tuned with a large-scale dataset, it still suffers from hallucinations and under-splitting.
To address these issues, this paper presents a simple and strong data refinement approach.
Here, we create WikiSplit++ by removing instances in WikiSplit where complex sentences do not entail at least one of the simpler sentences and reversing the order of reference simple sentences.
Experimental results show that training with WikiSplit++ leads to better performance than training with WikiSplit, even with fewer training instances.
In particular, our approach yields significant gains in the number of splits and the entailment ratio, a proxy for measuring hallucinations.
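
The two refinement steps are simple to state in code. Below is a minimal per-instance sketch, assuming a hypothetical `nli_entailment_prob(premise, hypothesis)` scorer standing in for an NLI classifier (the paper uses the DeBERTa-xxl MNLI model noted under Data Fields) and a 0.5 decision threshold chosen only for illustration; PySBD performs the sentence splitting:

```python
from typing import Optional

import pysbd

segmenter = pysbd.Segmenter(language="en", clean=False)


def nli_entailment_prob(premise: str, hypothesis: str) -> float:
    """Hypothetical NLI scorer: probability that `premise` entails `hypothesis`.

    A stand-in for a real NLI classifier; not part of this dataset's code.
    """
    raise NotImplementedError


def refine(complex_sent: str, simple_text: str, threshold: float = 0.5) -> Optional[str]:
    """Apply WikiSplit++-style refinement to one (complex, simple) pair."""
    # Filtering: split the reference into simple sentences and discard the
    # instance if the complex sentence fails to entail any one of them.
    simple_sents = segmenter.segment(simple_text)
    if any(nli_entailment_prob(complex_sent, s) < threshold for s in simple_sents):
        return None  # instance removed from the refined dataset

    # Reversal: reverse the order of the reference simple sentences.
    return " ".join(reversed(simple_sents))
```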

## License
