Languages: English
Multilinguality: monolingual
Size Categories: 1M<n<10M
Language Creators: crowdsourced
Annotations Creators: no-annotation
Source Datasets: original

anna-kay committed
Commit 7009309
1 Parent(s): 02da8ec

Reddit dataset card additions (#3781)


* Proposed changes are based on the official paper of the dataset. The name given to the dataset by the authors is Webis-TLDR-17 (corpus), so perhaps it should be modified as well.

* README.md.bak removed

* mention Webis-TLDR-17 in the title

Co-authored-by: anna-kay <annakougioumtz@gmail.com>
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

Commit from https://github.com/huggingface/datasets/commit/278db9f51222842cb591b64dbdc3f1f264667aa7

Files changed (1)
  1. README.md +30 -14
README.md CHANGED
@@ -2,10 +2,10 @@
  languages:
  - en
  paperswithcode_id: reddit
- pretty_name: Reddit
  ---

- # Dataset Card for "reddit"

  ## Table of Contents
  - [Dataset Description](#dataset-description)
@@ -33,9 +33,9 @@ pretty_name: Reddit

  ## Dataset Description

- - **Homepage:** [https://github.com/webis-de/webis-tldr-17-corpus](https://github.com/webis-de/webis-tldr-17-corpus)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  - **Size of downloaded dataset files:** 2996.31 MB
  - **Size of the generated dataset:** 18063.11 MB
@@ -43,7 +43,7 @@ pretty_name: Reddit

  ### Dataset Summary

- This corpus contains preprocessed posts from the Reddit dataset.
  The dataset consists of 3,848,330 posts with an average length of 270 words for content,
  and 28 words for the summary.

@@ -52,11 +52,20 @@ Content is used as document and summary is used as summary.

  ### Supported Tasks and Leaderboards

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ### Languages

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ## Dataset Structure

@@ -104,21 +113,26 @@ The data fields are the same among all splits.
  |-------|------:|
  |default|3848330|

  ## Dataset Creation

  ### Curation Rationale

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ### Source Data

  #### Initial Data Collection and Normalization

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  #### Who are the source language producers?

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ### Annotations

@@ -138,7 +152,7 @@ The data fields are the same among all splits.

  ### Social Impact of Dataset

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ### Discussion of Biases

@@ -146,13 +160,15 @@ The data fields are the same among all splits.

  ### Other Known Limitations

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ## Additional Information

  ### Dataset Curators

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

  ### Licensing Information
 
  languages:
  - en
  paperswithcode_id: reddit
+ pretty_name: Reddit Webis-TLDR-17
  ---

+ # Dataset Card for Reddit Webis-TLDR-17

  ## Table of Contents
  - [Dataset Description](#dataset-description)
 
  ## Dataset Description

+ - **Homepage:** [https://webis.de/data/webis-tldr-17.html](https://webis.de/data/webis-tldr-17.html)
+ - **Repository:** [https://github.com/webis-de/webis-tldr-17-corpus](https://github.com/webis-de/webis-tldr-17-corpus)
+ - **Paper:** [https://aclanthology.org/W17-4508](https://aclanthology.org/W17-4508)
  - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
  - **Size of downloaded dataset files:** 2996.31 MB
  - **Size of the generated dataset:** 18063.11 MB
 
  ### Dataset Summary

+ This corpus contains preprocessed posts from the Reddit dataset (Webis-TLDR-17).
  The dataset consists of 3,848,330 posts with an average length of 270 words for content,
  and 28 words for the summary.

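A minimal loading sketch with the `datasets` library; the dataset id `reddit` and the `content`/`summary` field names follow this card, but verify them against the loader you actually use:

```python
from datasets import load_dataset

# Webis-TLDR-17 ships as a single "train" split under the "reddit" dataset id.
dataset = load_dataset("reddit", split="train")

example = dataset[0]
print(example["content"][:200])  # the post body used as the document
print(example["summary"])        # the author-written TL;DR used as the summary
```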
 
 
  ### Supported Tasks and Leaderboards

+ Summarization (abstractive)
+
+ Known ROUGE scores achieved for the Webis-TLDR-17:
+
+ | Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper/Source |
+ |-------|-------|-------|-------|------:|
+ | Transformer + Copy (Gehrmann et al., 2019) | 22 | 6 | 17 | Generating Summaries with Finetuned Language Models |
+ | Unified VAE + PGN (Choi et al., 2019) | 19 | 4 | 15 | VAE-PGN based Abstractive Model in Multi-stage Architecture for Text Summarization |
+
+ (Source: https://github.com/sebastianruder/NLP-progress/blob/master/english/summarization.md)

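For leaderboard-style comparisons, ROUGE can be computed with any standard implementation; below is a sketch using the `rouge_score` package (the package choice and the example strings are illustrative, not from the paper):

```python
from rouge_score import rouge_scorer

# ROUGE-1/2/L between a generated TL;DR and the author-written reference.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "got stuck in traffic and completely missed the interview"
prediction = "missed the job interview because of heavy traffic"

for name, score in scorer.score(reference, prediction).items():
    print(f"{name}: precision={score.precision:.3f} recall={score.recall:.3f} f1={score.fmeasure:.3f}")
```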
 
  ### Languages

+ English

  ## Dataset Structure
 
 
  |-------|------:|
  |default|3848330|

+ This corpus does not contain a separate test set. Thus it is up to the users to divide the corpus into appropriate training, validation, and test sets.

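One way to derive such splits with the `datasets` library; the 10% ratios and the seed below are arbitrary choices, not prescribed by the corpus:

```python
from datasets import load_dataset

dataset = load_dataset("reddit", split="train")

# First carve out a test set, then a validation set from what remains (arbitrary 10% each).
train_test = dataset.train_test_split(test_size=0.1, seed=42)
train_valid = train_test["train"].train_test_split(test_size=0.1, seed=42)

splits = {
    "train": train_valid["train"],
    "validation": train_valid["test"],
    "test": train_test["test"],
}
print({name: len(split) for name, split in splits.items()})
```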

  ## Dataset Creation

  ### Curation Rationale

+ For the task of abstractive summarization, the creators of Webis-TLDR-17 propose mining social media for author-provided summaries, taking advantage of the common practice of appending a "TL;DR" to long posts. A large Reddit crawl was used to yield the Webis-TLDR-17 corpus. The dataset is intended to complement existing summarization corpora, which come primarily from the news genre.

  ### Source Data

+ Reddit posts (submissions & comments) containing "TL;DR", posted between 2006 and 2016. Multiple subreddits are included.

  #### Initial Data Collection and Normalization

+ The initial data was a set of 286 million submissions and 1.6 billion comments posted to Reddit between 2006 and 2016. A five-step pipeline of consecutive filtering steps was then applied.

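The paper's pipeline is more involved than can be shown here; the sketch below is not that pipeline, only an illustration of the core step of splitting a post into content and TL;DR at the marker (the regex and length thresholds are made up for the example):

```python
import re

# Illustrative TL;DR marker pattern; the corpus' actual extraction rules are more involved.
TLDR_RE = re.compile(r"\btl\s*;?\s*dr\b[\s:;,-]*", re.IGNORECASE)

def split_tldr(post: str, min_content_words: int = 30, min_summary_words: int = 3):
    """Split a post into (content, summary) at the last TL;DR marker, or return None."""
    matches = list(TLDR_RE.finditer(post))
    if not matches:
        return None
    last = matches[-1]
    content = post[: last.start()].strip()
    summary = post[last.end():].strip()
    # Keep only pairs where both parts are non-trivial and the summary is the shorter one.
    if len(content.split()) < min_content_words or len(summary.split()) < min_summary_words:
        return None
    if len(summary.split()) >= len(content.split()):
        return None
    return content, summary

post = "I spent the whole afternoon at the DMV because I forgot one form... " * 5 + "TL;DR: bring every document on the checklist."
print(split_tldr(post))
```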
 
  #### Who are the source language producers?

+ The contents of the dataset are produced by human authors. Bot-generated content was eliminated by filtering out all bot accounts with the help of an extensive list provided by the Reddit community, as well as by manual inspection of cases where the user name contained the substring "bot".

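A rough illustration of that kind of account-level filter; the bot list and the name heuristic below are placeholders, not the curators' actual resources:

```python
# Placeholder bot list; the curators used an extensive community-maintained list plus manual checks.
KNOWN_BOTS = {"AutoModerator", "autotldr", "RemindMeBot"}

def is_probable_bot(author: str) -> bool:
    """Flag authors on the known-bot list or whose user name contains 'bot'."""
    return author in KNOWN_BOTS or "bot" in author.lower()

posts = [
    {"author": "throwaway123", "body": "..."},
    {"author": "autotldr", "body": "..."},
    {"author": "haiku_robot", "body": "..."},
]
human_posts = [p for p in posts if not is_probable_bot(p["author"])]
print([p["author"] for p in human_posts])  # -> ['throwaway123']
```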
 
  ### Annotations


  ### Social Impact of Dataset

+ This dataset has been created to serve as a source of large-scale summarization training data. It is primarily geared towards automatic abstractive summarization, which can be considered one of the most challenging variants of automatic summarization. It also aims to tackle the lack of genre diversity in existing summarization datasets, most of which are news-related.

  ### Discussion of Biases


  ### Other Known Limitations

+ Reddit users write TL;DRs with various intentions, such as providing a “true” summary, asking questions or for help, or forming judgments and conclusions. As noted in the paper introducing the dataset, while the first kind of TL;DR posts are the most important for training summarization models, the latter allow for various alternative summarization-related tasks.
+
+ Although filtering was performed, abusive language may still be present.

  ## Additional Information

  ### Dataset Curators

+ Michael Völske, Martin Potthast, Shahbaz Syed, Benno Stein

  ### Licensing Information