# Dataset Card for "reddit"

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/webis-de/webis-tldr-17-corpus](https://github.com/webis-de/webis-tldr-17-corpus)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2996.31 MB
- **Size of the generated dataset:** 18063.11 MB
- **Total amount of disk used:** 21059.41 MB

### [Dataset Summary](#dataset-summary)

This corpus contains preprocessed posts from the Reddit dataset (Webis-TLDR-17).
It consists of 3,848,330 posts with an average length of 270 words for the content
and 28 words for the summary.

Features include the strings: author, body, normalizedBody, content, summary, id, subreddit, subreddit_id.
The `content` field is used as the document and the `summary` field as its reference summary.

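The corpus can be loaded with the Hugging Face `datasets` library. The snippet below is only a minimal sketch: it assumes the corpus is available under the `reddit` identifier used in this card's title, and the full download is roughly 3 GB.

```python
from datasets import load_dataset

# Load the single "train" split of the corpus; the identifier follows this
# card's title and may differ if the dataset is hosted under another name.
dataset = load_dataset("reddit", split="train")

example = dataset[0]
print(example["subreddit"])  # subreddit the post was taken from
print(example["content"])    # post body used as the input document
print(example["summary"])    # author-written TL;DR used as the summary
```
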
### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for the `default` configuration of the dataset.

### [Data Instances](#data-instances)

#### default

- **Size of downloaded dataset files:** 2996.31 MB
- **Size of the generated dataset:** 18063.11 MB
- **Total amount of disk used:** 21059.41 MB

An example of 'train' looks as follows.

```
{
    "author": "me",
    "body": "<>",
    "content": "input document.",
    "id": "1",
    "normalizedBody": "",
    "subreddit": "machinelearning",
    "subreddit_id": "2",
    "summary": "output summary."
}
```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### default

- `author`: a `string` feature.
- `body`: a `string` feature.
- `normalizedBody`: a `string` feature.
- `subreddit`: a `string` feature.
- `subreddit_id`: a `string` feature.
- `id`: a `string` feature.
- `content`: a `string` feature.
- `summary`: a `string` feature.

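The `content` and `summary` fields above form the document/summary pair for abstractive summarization. The following preprocessing sketch is illustrative only and not part of the dataset: the `t5-small` checkpoint and the length limits are arbitrary choices.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Illustrative checkpoint; any sequence-to-sequence tokenizer works similarly.
tokenizer = AutoTokenizer.from_pretrained("t5-small")

def preprocess(batch):
    # "content" is the post body (input document), "summary" the author TL;DR (target).
    model_inputs = tokenizer(batch["content"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=64, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

dataset = load_dataset("reddit", split="train")
tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)
```
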
### [Data Splits Sample Size](#data-splits-sample-size)

|  name   |  train  |
|---------|--------:|
| default | 3848330 |

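Only a single `train` split is provided, so a held-out evaluation set has to be created manually. A minimal sketch using `datasets` is shown below; the 5% fraction and the seed are arbitrary choices.

```python
from datasets import load_dataset

# Carve a validation set out of the single "train" split.
dataset = load_dataset("reddit", split="train")
splits = dataset.train_test_split(test_size=0.05, seed=42)
train_ds, valid_ds = splits["train"], splits["test"]
print(len(train_ds), len(valid_ds))
```
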
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@inproceedings{volske-etal-2017-tl,
    title = "{TL};{DR}: Mining {R}eddit to Learn Automatic Summarization",
    author = {V{\"o}lske, Michael and
      Potthast, Martin and
      Syed, Shahbaz and
      Stein, Benno},
    booktitle = "Proceedings of the Workshop on New Frontiers in Summarization",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/W17-4508",
    doi = "10.18653/v1/W17-4508",
    pages = "59--63",
    abstract = "Recent advances in automatic text summarization have used deep neural networks to generate high-quality abstractive summaries, but the performance of these models strongly depends on large amounts of suitable training data. We propose a new method for mining social media for author-provided summaries, taking advantage of the common practice of appending a {``}TL;DR{''} to long posts. A case study using a large Reddit crawl yields the Webis-TLDR-17 dataset, complementing existing corpora primarily from the news genre. Our technique is likely applicable to other social media sites and general web crawls.",
}
```

### Contributions

Thanks to [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.