---
language:
  - en
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
task_categories:
  - summarization
  - text-generation
task_ids: []
tags:
  - conditional-text-generation
dataset_info:
  config_name: document
  features:
    - name: report
      dtype: string
    - name: summary
      dtype: string
  splits:
    - name: train
      num_bytes: 953321013
      num_examples: 17517
    - name: validation
      num_bytes: 55820431
      num_examples: 973
    - name: test
      num_bytes: 51591123
      num_examples: 973
  download_size: 506610432
  dataset_size: 1060732567
configs:
  - config_name: document
    data_files:
      - split: train
        path: document/train-*
      - split: validation
        path: document/validation-*
      - split: test
        path: document/test-*
    default: true
---

# GovReport dataset for summarization

Dataset for summarization of long documents.
Adapted from the authors' original repository and the paper *Efficient Attentions for Long Document Summarization* (see the citation below).
This dataset is compatible with the `run_summarization.py` script from Transformers if you add the following entry to the `summarization_name_mapping` variable:

"ccdv/govreport-summarization": ("report", "summary")

## Data Fields

- `report`: a string containing the body of the government report
- `summary`: a string containing the summary of the report
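As a quick sanity check on these fields, here is a minimal sketch (assuming the Hugging Face `datasets` library is installed) that loads the default `document` configuration and inspects one training example:

```python
from datasets import load_dataset

# Load the dataset; the "document" configuration is the default one.
dataset = load_dataset("ccdv/govreport-summarization")
print(dataset)  # DatasetDict with train, validation, and test splits

example = dataset["train"][0]
print(example["report"][:300])   # beginning of the report body
print(example["summary"][:300])  # beginning of the reference summary
```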

## Data Splits

This dataset has 3 splits: train, validation, and test.
Token counts are computed with a RoBERTa tokenizer.

| Dataset Split | Number of Instances | Avg. tokens (report / summary) |
| ------------- | ------------------- | ------------------------------ |
| Train         | 17,517              | < 9,000 / < 500                |
| Validation    | 973                 | < 9,000 / < 500                |
| Test          | 973                 | < 9,000 / < 500                |
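The averages above can be reproduced approximately with a sketch like the following (assuming `transformers` and `datasets` are installed; only a small sample of the validation split is tokenized here to keep the run short):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
dataset = load_dataset("ccdv/govreport-summarization", split="validation")

# Estimate average token counts from a small sample instead of the full split.
sample = dataset.select(range(50))
report_lens = [len(tokenizer(ex["report"])["input_ids"]) for ex in sample]
summary_lens = [len(tokenizer(ex["summary"])["input_ids"]) for ex in sample]

print(f"avg report tokens:  {sum(report_lens) / len(report_lens):.0f}")
print(f"avg summary tokens: {sum(summary_lens) / len(summary_lens):.0f}")
```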

## Cite original article

@misc{huang2021efficient,
      title={Efficient Attentions for Long Document Summarization}, 
      author={Luyang Huang and Shuyang Cao and Nikolaus Parulian and Heng Ji and Lu Wang},
      year={2021},
      eprint={2104.02112},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}