---
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: complex
      dtype: string
    - name: simple_reversed
      dtype: string
    - name: simple_tokenized
      sequence: string
    - name: simple_original
      dtype: string
    - name: entailment_prob
      dtype: float64
  splits:
    - name: train
      num_bytes: 115032683
      num_examples: 139241
    - name: validation
      num_bytes: 14334442
      num_examples: 17424
    - name: test
      num_bytes: 14285722
      num_examples: 17412
  download_size: 91848881
  dataset_size: 143652847
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
license: cc-by-sa-4.0
task_categories:
  - text2text-generation
language:
  - en
pretty_name: MinWikiSplit++
---

# MinWikiSplit++

This dataset is the HuggingFace version of MinWikiSplit++.
MinWikiSplit++ enhances the original MinWikiSplit by applying two techniques: filtering through NLI classification and sentence-order reversing. Together, these remove noise and reduce hallucinations relative to the original MinWikiSplit.
The preprocessed MinWikiSplit dataset that formed the basis for this can be found here.
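
As an illustration of the NLI-filtering idea, the sketch below scores whether a simple sentence is entailed by its complex source using an off-the-shelf NLI model. This is a minimal sketch: the `microsoft/deberta-v2-xxlarge-mnli` checkpoint and the example sentences are assumptions, as the card only states that a DeBERTa-xxl model was used.

```python
from transformers import pipeline

# Minimal sketch of an NLI entailment check in the spirit of the dataset's
# filtering step. The checkpoint is an assumption; the card only says "DeBERTa-xxl".
nli = pipeline("text-classification", model="microsoft/deberta-v2-xxlarge-mnli")

# Hypothetical example sentences, not taken from the dataset.
complex_sentence = "Bo Saris was born in Venlo, Netherlands, and now resides in London."
simple_sentence = "Bo Saris was born in Venlo, Netherlands."

# Score the premise/hypothesis pair; top_k=None returns probabilities for
# all labels (CONTRADICTION, NEUTRAL, ENTAILMENT).
scores = nli({"text": complex_sentence, "text_pair": simple_sentence}, top_k=None)
print(scores)
```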

## Dataset Description

### Usage

```python
import datasets as ds

dataset: ds.DatasetDict = ds.load_dataset("cl-nagoya/min-wikisplit-pp")

print(dataset)

# DatasetDict({
#     train: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob'],
#         num_rows: 139241
#     })
#     validation: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob'],
#         num_rows: 17424
#     })
#     test: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob'],
#         num_rows: 17412
#     })
# })
```
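
For split-and-rephrase training, one natural setup (an illustrative choice here, not a recipe prescribed by this card) is to use `complex` as the source and `simple_reversed` as the target:

```python
import datasets as ds

train = ds.load_dataset("cl-nagoya/min-wikisplit-pp", split="train")

# Map each example to a (source, target) pair; using `simple_reversed` as
# the target follows the dataset's motivation, but this framing is an
# illustrative choice rather than a prescribed recipe.
pairs = train.map(
    lambda ex: {"source": ex["complex"], "target": ex["simple_reversed"]},
    remove_columns=train.column_names,
)
print(pairs[0])
```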

### Data Fields

- `id`: The ID of the example (note that these IDs are not compatible with those of the existing MinWikiSplit)
- `complex`: A complex sentence
- `simple_reversed`: The simple sentences with their order reversed
- `simple_tokenized`: A list of the simple sentences segmented by PySBD, in the original (non-reversed) order
- `simple_original`: The simple sentences in their original order
- `entailment_prob`: The average probability that each simple sentence is classified as entailment given the complex sentence as the premise, computed with a DeBERTa-xxl NLI model (see the sketch below)
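
A quick sanity check of how these fields relate, plus a stricter entailment filter. This is a minimal sketch in which the single-space join and the 0.9 threshold are illustrative assumptions:

```python
import datasets as ds

train = ds.load_dataset("cl-nagoya/min-wikisplit-pp", split="train")

example = train[0]
# `simple_reversed` should match the PySBD-segmented sentences joined in
# reverse order (the single-space delimiter is an assumption).
reconstructed = " ".join(reversed(example["simple_tokenized"]))
print(reconstructed == example["simple_reversed"])

# Keep only examples whose simple sentences are entailed with high average
# probability; the 0.9 threshold is an illustrative choice.
high_confidence = train.filter(lambda ex: ex["entailment_prob"] >= 0.9)
print(f"{len(high_confidence)} of {len(train)} examples kept")
```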

## Paper

Tsukagoshi et al., WikiSplit++: Easy Data Refinement for Split and Rephrase, LREC-COLING 2024.

## License

MinWikiSplit is built upon the WikiSplit dataset, which is distributed under the CC BY-SA 4.0 license.
This dataset therefore follows suit and is also distributed under the CC BY-SA 4.0 license.