---
language:
  - en
multilinguality:
  - monolingual
size_categories:
  - 100K<n<1M
task_categories:
  - summarization
  - text-generation
task_ids: []
tags:
  - conditional-text-generation
dataset_info:
  - config_name: document
    features:
      - name: article
        dtype: string
      - name: abstract
        dtype: string
    splits:
      - name: train
        num_bytes: 2236406736
        num_examples: 119924
      - name: validation
        num_bytes: 126510743
        num_examples: 6633
      - name: test
        num_bytes: 126296182
        num_examples: 6658
    download_size: 1154975484
    dataset_size: 2489213661
  - config_name: section
    features:
      - name: article
        dtype: string
      - name: abstract
        dtype: string
    splits:
      - name: train
        num_bytes: 2257744955
        num_examples: 119924
      - name: validation
        num_bytes: 127711559
        num_examples: 6633
      - name: test
        num_bytes: 127486937
        num_examples: 6658
    download_size: 1163165290
    dataset_size: 2512943451
configs:
  - config_name: document
    data_files:
      - split: train
        path: document/train-*
      - split: validation
        path: document/validation-*
      - split: test
        path: document/test-*
  - config_name: section
    data_files:
      - split: train
        path: section/train-*
      - split: validation
        path: section/validation-*
      - split: test
        path: section/test-*
    default: true
---

PubMed dataset for summarization

Dataset for summarization of long documents.
Adapted from the original authors' repository for the paper cited below.
Note that the original data are pre-tokenized, so this dataset returns " ".join(text) and adds "\n" between paragraphs.
This dataset is compatible with the run_summarization.py script from Transformers if you add this line to the summarization_name_mapping variable:

"ccdv/pubmed-summarization": ("article", "abstract")

Data Fields

  • id: paper id
  • article: a string containing the body of the paper
  • abstract: a string containing the abstract of the paper
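
A minimal loading sketch (assuming the datasets library is installed; the document and section config names come from the metadata above, with section as the default):

```python
from datasets import load_dataset

# Load the "document" config; "section" is the other available config.
dataset = load_dataset("ccdv/pubmed-summarization", "document", split="validation")

example = dataset[0]
print(example["article"][:300])   # body of the paper
print(example["abstract"][:300])  # abstract of the paper
```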

Data Splits

This dataset has 3 splits: train, validation, and test.
Token counts are whitespace-based.

| Dataset Split | Number of Instances | Avg. tokens (article / abstract) |
| ------------- | ------------------- | -------------------------------- |
| Train         | 119,924             | 3043 / 215                       |
| Validation    | 6,633               | 3111 / 216                       |
| Test          | 6,658               | 3092 / 219                       |
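
A hedged sketch of how those whitespace-based averages can be recomputed for one split:

```python
from datasets import load_dataset

# Recompute average whitespace-token counts on the validation split.
val = load_dataset("ccdv/pubmed-summarization", "document", split="validation")

def avg_whitespace_tokens(column):
    lengths = [len(text.split()) for text in val[column]]
    return sum(lengths) / len(lengths)

print(f"article:  {avg_whitespace_tokens('article'):.0f} tokens on average")
print(f"abstract: {avg_whitespace_tokens('abstract'):.0f} tokens on average")
```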

Cite original article

@inproceedings{cohan-etal-2018-discourse,
  title = "A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents",
  author = "Cohan, Arman  and
    Dernoncourt, Franck  and
    Kim, Doo Soon  and
    Bui, Trung  and
    Kim, Seokhwan  and
    Chang, Walter  and
    Goharian, Nazli",
  booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
  month = jun,
  year = "2018",
  address = "New Orleans, Louisiana",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/N18-2097",
  doi = "10.18653/v1/N18-2097",
  pages = "615--621",
  abstract = "Neural abstractive summarization models have led to promising results in summarizing relatively short documents. We propose the first model for abstractive summarization of single, longer-form documents (e.g., research papers). Our approach consists of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary. Empirical results on two large-scale datasets of scientific papers show that our model significantly outperforms state-of-the-art models.",
}