---
license: apache-2.0
task_categories:
  - summarization
language:
  - en
pretty_name: PeerSum
size_categories:
  - 10K<n<100K
---

This is PeerSum, a multi-document summarization dataset in the peer-review domain. More details can be found in the paper accepted at EMNLP 2023, *Summarizing Multiple Documents with Conversational Structure for Meta-review Generation*. The original code and datasets are publicly available on GitHub.

Please use the following code to download the dataset with the `datasets` library from Hugging Face.

```python
from datasets import load_dataset

# Load all samples, then filter by the `label` field to obtain the official splits.
peersum_all = load_dataset('oaimli/PeerSum', split='all')
peersum_train = peersum_all.filter(lambda s: s['label'] == 'train')
peersum_val = peersum_all.filter(lambda s: s['label'] == 'val')
peersum_test = peersum_all.filter(lambda s: s['label'] == 'test')
```
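
For a quick sanity check after filtering, a minimal sketch like the following prints the split sizes and peeks at one training sample (the field names are those listed in the schema below):

```python
# Minimal sanity check: split sizes and one training sample.
print(len(peersum_train), len(peersum_val), len(peersum_test))

sample = peersum_train[0]
print(sample['paper_title'])
print(sample['meta_review'][:300])  # first 300 characters of the meta-review
```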

The Hugging Face dataset is mainly intended for multi-document summarization. Each sample contains the following fields (see the sketch after the list for how they fit together):

* paper_id: str (a link to the raw data)
* paper_title: str
* paper_abstract: str
* paper_acceptance: str
* meta_review: str
* review_ids: list(str)
* review_writers: list(str)
* review_contents: list(str)
* review_ratings: list(int)
* review_confidences: list(int)
* review_reply_tos: list(str)
* label: str (train, val, or test)
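
As an illustration of how these fields can be combined for summarization, the sketch below concatenates the paper abstract and the review contents into a single source document and uses the meta-review as the target. The separator string and the helper function are arbitrary choices for this example, not part of the dataset.

```python
# Illustrative sketch (not part of the dataset): build source/target pairs
# for multi-document summarization from PeerSum samples.
def build_example(sample, sep=" ||| "):
    # Concatenate the paper abstract and all review contents as the source.
    source = sample['paper_abstract'] + sep + sep.join(sample['review_contents'])
    # The official meta-review serves as the reference summary.
    target = sample['meta_review']
    return {"source": source, "target": target}

pairs = [build_example(s) for s in peersum_train]
```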

You can also download the raw data from Google Drive. The raw data contains more information and can be used for other analyses of peer reviews.