system HF staff committed on
Commit 1ceed21
1 Parent(s): 7d7bf66

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1)
  1. README.md +171 -0
README.md ADDED
@@ -0,0 +1,171 @@
---
---

# Dataset Card for "reddit"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/webis-de/webis-tldr-17-corpus](https://github.com/webis-de/webis-tldr-17-corpus)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2996.31 MB
- **Size of the generated dataset:** 18063.11 MB
- **Total amount of disk used:** 21059.41 MB

### [Dataset Summary](#dataset-summary)

This corpus contains preprocessed posts from Reddit (the Webis-TLDR-17 corpus).
The dataset consists of 3,848,330 posts with an average length of 270 words for the content and 28 words for the summary.

Each post provides the following string features: `author`, `body`, `normalizedBody`, `content`, `summary`, `subreddit`, and `subreddit_id`.
For summarization, the `content` field is used as the document and the `summary` field as the target summary.

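To make that document/summary pairing concrete, here is a minimal loading sketch using the `datasets` library. It assumes the corpus is available on the Hub under the name `reddit`, as the card title suggests, and that you have the disk space listed above (roughly a 3 GB download and 18 GB once generated).

```python
from datasets import load_dataset

# Download and prepare the single "default" configuration (only a train split exists).
dataset = load_dataset("reddit", split="train")

# `content` is the source document; `summary` is the author-provided TL;DR.
example = dataset[0]
print(example["subreddit"])
print(example["content"][:300])
print(example["summary"])
```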
### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### default

- **Size of downloaded dataset files:** 2996.31 MB
- **Size of the generated dataset:** 18063.11 MB
- **Total amount of disk used:** 21059.41 MB

An example of 'train' looks as follows.
```
{
    "author": "me",
    "body": "<>",
    "content": "input document.",
    "id": "1",
    "normalizedBody": "",
    "subreddit": "machinelearning",
    "subreddit_id": "2",
    "summary": "output summary."
}
```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### default
- `author`: a `string` feature.
- `body`: a `string` feature.
- `normalizedBody`: a `string` feature.
- `subreddit`: a `string` feature.
- `subreddit_id`: a `string` feature.
- `id`: a `string` feature.
- `content`: a `string` feature.
- `summary`: a `string` feature.

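To confirm these types programmatically, a loaded dataset exposes its schema through `column_names` and `features`; the snippet below is a sketch along those lines (again assuming the Hub name `reddit`).

```python
from datasets import load_dataset

dataset = load_dataset("reddit", split="train")

# Every field listed above is a plain string feature.
print(dataset.column_names)
print(dataset.features)  # e.g. {'author': Value(dtype='string', id=None), ...}
```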
### [Data Splits Sample Size](#data-splits-sample-size)

| name    |   train |
|---------|--------:|
| default | 3848330 |

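Only a `train` split is published, so a common first step is to hold out part of it for validation. The sketch below uses the library's `train_test_split` helper; the 5% fraction and the seed are arbitrary choices, not part of the original dataset.

```python
from datasets import load_dataset

dataset = load_dataset("reddit", split="train")  # 3,848,330 examples

# Carve out a small held-out set for evaluation; fraction and seed are arbitrary.
splits = dataset.train_test_split(test_size=0.05, seed=42)
print(len(splits["train"]), len(splits["test"]))
```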
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@inproceedings{volske-etal-2017-tl,
    title = "{TL};{DR}: Mining {R}eddit to Learn Automatic Summarization",
    author = {V{\"o}lske, Michael and
      Potthast, Martin and
      Syed, Shahbaz and
      Stein, Benno},
    booktitle = "Proceedings of the Workshop on New Frontiers in Summarization",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/W17-4508",
    doi = "10.18653/v1/W17-4508",
    pages = "59--63",
    abstract = "Recent advances in automatic text summarization have used deep neural networks to generate high-quality abstractive summaries, but the performance of these models strongly depends on large amounts of suitable training data. We propose a new method for mining social media for author-provided summaries, taking advantage of the common practice of appending a {``}TL;DR{''} to long posts. A case study using a large Reddit crawl yields the Webis-TLDR-17 dataset, complementing existing corpora primarily from the news genre. Our technique is likely applicable to other social media sites and general web crawls.",
}
```

### Contributions

Thanks to [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.