---
task_categories:
- summarization
language:
- en
size_categories:
- 1K<n<10K
---
To create this dataset, the test split of CNN/DailyMail was filtered using the code by Aumiller et al. (1), available at https://github.com/dennlinger/summaries/tree/main, with the following settings: min_length_summary = 18; min_length_reference = 250; length_metric = "whitespace"; min_compression_ratio = 2.5.
Furthermore:
- line breaks: every \n in the reference summaries (column "reference-summary") was replaced by a space; the articles (column "text") did not contain any line breaks
- non-breaking spaces: every \xa0 (both in articles and in summaries) was replaced by a space
- bi-gram overlap: only pairs with a bi-gram overlap fraction between summary and original text <= 0.63 were kept, meaning that all summaries in the dataset are on the abstractive side
Finally, 5k rows were sampled at random and kept in the dataset; all other rows were removed.
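The cleaning and filtering steps above can be sketched in plain Python. This is an illustrative approximation, not the actual pipeline: the exact bi-gram overlap definition used by Aumiller et al. may differ, and the helper names (`clean`, `overlap_fraction`) are hypothetical.

```python
import random


def bigrams(tokens):
    # Whitespace-token bigrams, matching the "whitespace" length metric above.
    return {(a, b) for a, b in zip(tokens, tokens[1:])}


def overlap_fraction(summary, text):
    # Assumed reading of the bi-gram overlap fraction: the share of summary
    # bigrams that also occur in the article. The original code may differ.
    s = bigrams(summary.split())
    t = bigrams(text.split())
    return len(s & t) / len(s) if s else 0.0


def clean(s):
    # Replace line breaks and non-breaking spaces with plain spaces.
    return s.replace("\n", " ").replace("\xa0", " ")


# Toy stand-in for the filtered CNN/DailyMail test split.
rows = [
    {"text": "The quick brown fox jumped over\xa0the lazy dog today.",
     "reference-summary": "A fox\njumped over a dog."},
]

cleaned = [{"text": clean(r["text"]),
            "reference-summary": clean(r["reference-summary"])} for r in rows]

# Keep only pairs on the abstractive side (overlap <= 0.63).
kept = [r for r in cleaned
        if overlap_fraction(r["reference-summary"], r["text"]) <= 0.63]

# Random 5k subsample (capped at the available rows in this toy example).
random.seed(0)
sample = random.sample(kept, k=min(5000, len(kept)))
```

The sketch keeps the two column names used in the card ("text" and "reference-summary") so the row schema matches the released dataset.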