Commit 8db751b (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,224 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - crowdsourced
+ languages:
+ - de
+ licenses:
+ - cc-by-nc-sa-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - sentiment-classification
+ ---
+
+ # Dataset Card for One Million Posts Corpus
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://ofai.github.io/million-post-corpus/
+ - **Repository:** https://github.com/OFAI/million-post-corpus
+ - **Paper:** https://dl.acm.org/doi/10.1145/3077136.3080711
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ The “One Million Posts” corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German language).
+
+ DER STANDARD is an Austrian daily broadsheet newspaper. On the newspaper’s website, there is a discussion section below each news article where readers engage in online discussions. The data set contains a selection of user posts from the 12 month time span from 2015-06-01 to 2016-05-31. There are 11,773 labeled and 1,000,000 unlabeled posts in the data set. The labeled posts were annotated by professional forum moderators employed by the newspaper.
+
+ The data set contains the following data for each post:
+
+ * Post ID
+ * Article ID
+ * Headline (max. 250 characters)
+ * Main Body (max. 750 characters)
+ * User ID (the user names used by the website have been re-mapped to new numeric IDs)
+ * Time stamp
+ * Parent post (replies give rise to tree-like discussion thread structures)
+ * Status (online or deleted by a moderator)
+ * Number of positive votes by other community members
+ * Number of negative votes by other community members
+
+ For each article, the data set contains the following data:
+
+ * Article ID
+ * Publishing date
+ * Topic Path (e.g.: Newsroom / Sports / Motorsports / Formula 1)
+ * Title
+ * Body
+
+ Detailed descriptions of the post selection and annotation procedures are given in the paper.
+
+ #### Annotated Categories
+
+ Potentially undesirable content:
+
+ * Sentiment (negative/neutral/positive)
+ An important goal is to detect changes in the prevalent sentiment in a discussion, e.g., the location within the fora and the point in time where a turn from positive/neutral sentiment to negative sentiment takes place.
+ * Off-Topic (yes/no)
+ Posts which digress too far from the topic of the corresponding article.
+ * Inappropriate (yes/no)
+ Swearwords, suggestive and obscene language, insults, threats etc.
+ * Discriminating (yes/no)
+ Racist, sexist, misogynistic, homophobic, antisemitic and other misanthropic content.
+
+ Neutral content that requires a reaction:
+
+ * Feedback (yes/no)
+ Sometimes users ask questions or give feedback to the author of the article or the newspaper in general, which may require a reply/reaction.
+
+ Potentially desirable content:
+
+ * Personal Stories (yes/no)
+ In certain fora, users are encouraged to share their personal stories, experiences, anecdotes etc. regarding the respective topic.
+ * Arguments Used (yes/no)
+ It is desirable for users to back their statements with rational argumentation, reasoning and sources.
+
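+ A minimal loading sketch with the `datasets` library (this assumes the corpus is available under the `omp` name used by the loading script in this repository; otherwise pass the path to the local `omp.py` instead):
+
+ ```python
+ from datasets import load_dataset
+
+ # Each configuration ("posts_labeled", "posts_unlabeled", "articles") exposes a single "train" split.
+ labeled = load_dataset("omp", "posts_labeled", split="train")
+ articles = load_dataset("omp", "articles", split="train")
+
+ print(labeled[0])    # one annotation row: post fields plus Category, Value and Fold
+ print(len(labeled))  # 40,567 annotation rows covering the 11,773 labeled posts
+ ```
+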
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ Austrian German
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ The data set contains the following data for each post:
+
+ * **ID_Post**: Post ID
+ * **ID_Parent_Post**: Parent post (replies give rise to tree-like discussion thread structures)
+ * **ID_Article**: Article ID
+ * **ID_User**: User ID (the user names used by the website have been re-mapped to new numeric IDs)
+ * **Headline**: Headline (max. 250 characters)
+ * **Body**: Main Body (max. 750 characters)
+ * **CreatedAt**: Time stamp
+ * **Status**: Status (online or deleted by a moderator)
+ * **PositiveVotes**: Number of positive votes by other community members
+ * **NegativeVotes**: Number of negative votes by other community members
+
+ Labeled posts also contain:
+
+ * **Category**: The category of the annotation, one of: ArgumentsUsed, Discriminating, Inappropriate, OffTopic, PersonalStories, PossiblyFeedback, SentimentNegative, SentimentNeutral, SentimentPositive
+ * **Value**: either 0 or 1, explicitly indicating whether or not the post has the specified category as a label (i.e. a category of `ArgumentsUsed` with a value of `0` means that an annotator explicitly labeled this post as not using arguments, as opposed to the mere absence of a positive label).
+ * **Fold**: an integer from 0 to 9 giving the post's fold in the authors' 10-fold split
+
+ For each article, the data set contains the following data:
+
+ * **ID_Article**: Article ID
+ * **publishingDate**: Publishing date
+ * **Path**: Topic Path (e.g.: Newsroom / Sports / Motorsports / Formula 1)
+ * **Title**: Title
+ * **Body**: Body
+
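+ Since `Category` is a `ClassLabel` feature in the `posts_labeled` configuration, the stored integers can be mapped back to the category names, and `Fold` can be used to reproduce the authors' cross-validation setup. A minimal sketch, again assuming the dataset loads under the `omp` name:
+
+ ```python
+ from datasets import load_dataset
+
+ labeled = load_dataset("omp", "posts_labeled", split="train")
+
+ # Decode the ClassLabel integer back into a category name for the first annotation row.
+ category_names = labeled.features["Category"].names
+ first = labeled[0]
+ print(category_names[first["Category"]], first["Value"])
+
+ # Hold out fold 0 for evaluation and keep the remaining nine folds for training.
+ train_part = labeled.filter(lambda row: row["Fold"] != 0)
+ eval_part = labeled.filter(lambda row: row["Fold"] == 0)
+ ```
+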
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ This data set is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
+
+ ### Citation Information
+
+ ```
+ @InProceedings{Schabus2018,
+ author = {Dietmar Schabus and Marcin Skowron},
+ title = {Academic-Industrial Perspective on the Development and Deployment of a Moderation System for a Newspaper Website},
+ booktitle = {Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC)},
+ year = {2018},
+ address = {Miyazaki, Japan},
+ month = may,
+ pages = {1602-1605},
+ abstract = {This paper describes an approach and our experiences from the development, deployment and usability testing of a Natural Language Processing (NLP) and Information Retrieval system that supports the moderation of user comments on a large newspaper website. We highlight some of the differences between industry-oriented and academic research settings and their influence on the decisions made in the data collection and annotation processes, selection of document representation and machine learning methods. We report on classification results, where the problems to solve and the data to work with come from a commercial enterprise. In this context typical for NLP research, we discuss relevant industrial aspects. We believe that the challenges faced as well as the solutions proposed for addressing them can provide insights to others working in a similar setting.},
+ url = {http://www.lrec-conf.org/proceedings/lrec2018/summaries/8885.html},
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"posts_labeled": {"description": "The \u201cOne Million Posts\u201d corpus is an annotated data set consisting of\nuser comments posted to an Austrian newspaper website (in German language).\n\nDER STANDARD is an Austrian daily broadsheet newspaper. On the newspaper\u2019s website,\nthere is a discussion section below each news article where readers engage in\nonline discussions. The data set contains a selection of user posts from the\n12 month time span from 2015-06-01 to 2016-05-31. There are 11,773 labeled and\n1,000,000 unlabeled posts in the data set. The labeled posts were annotated by\nprofessional forum moderators employed by the newspaper.\n\nThe data set contains the following data for each post:\n\n* Post ID\n* Article ID\n* Headline (max. 250 characters)\n* Main Body (max. 750 characters)\n* User ID (the user names used by the website have been re-mapped to new numeric IDs)\n* Time stamp\n* Parent post (replies give rise to tree-like discussion thread structures)\n* Status (online or deleted by a moderator)\n* Number of positive votes by other community members\n* Number of negative votes by other community members\n\nFor each article, the data set contains the following data:\n\n* Article ID\n* Publishing date\n* Topic Path (e.g.: Newsroom / Sports / Motorsports / Formula 1)\n* Title\n* Body\n\nDetailed descriptions of the post selection and annotation procedures are given in the paper.\n\n## Annotated Categories\n\nPotentially undesirable content:\n\n* Sentiment (negative/neutral/positive)\n An important goal is to detect changes in the prevalent sentiment in a discussion, e.g.,\n the location within the fora and the point in time where a turn from positive/neutral\n sentiment to negative sentiment takes place.\n* Off-Topic (yes/no)\n Posts which digress too far from the topic of the corresponding article.\n* Inappropriate (yes/no)\n Swearwords, suggestive and obscene language, insults, threats etc.\n* Discriminating (yes/no)\n Racist, sexist, misogynistic, homophobic, antisemitic and other misanthropic content.\n\nNeutral content that requires a reaction:\n\n* Feedback (yes/no)\n Sometimes users ask questions or give feedback to the author of the article or the\n newspaper in general, which may require a reply/reaction.\n\nPotentially desirable content:\n\n* Personal Stories (yes/no)\n In certain fora, users are encouraged to share their personal stories, experiences,\n anecdotes etc. 
regarding the respective topic.\n* Arguments Used (yes/no)\n It is desirable for users to back their statements with rational argumentation,\n reasoning and sources.\n", "citation": "@InProceedings{Schabus2017,\n Author = {Dietmar Schabus and Marcin Skowron and Martin Trapp},\n Title = {One Million Posts: A Data Set of German Online Discussions},\n Booktitle = {Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)},\n Pages = {1241--1244},\n Year = {2017},\n Address = {Tokyo, Japan},\n Doi = {10.1145/3077136.3080711},\n Month = aug\n}\n", "homepage": "https://ofai.github.io/million-post-corpus/", "license": "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License", "features": {"ID_Post": {"dtype": "string", "id": null, "_type": "Value"}, "ID_Parent_Post": {"dtype": "string", "id": null, "_type": "Value"}, "ID_Article": {"dtype": "string", "id": null, "_type": "Value"}, "ID_User": {"dtype": "string", "id": null, "_type": "Value"}, "CreatedAt": {"dtype": "string", "id": null, "_type": "Value"}, "Status": {"dtype": "string", "id": null, "_type": "Value"}, "Headline": {"dtype": "string", "id": null, "_type": "Value"}, "Body": {"dtype": "string", "id": null, "_type": "Value"}, "PositiveVotes": {"dtype": "int32", "id": null, "_type": "Value"}, "NegativeVotes": {"dtype": "int32", "id": null, "_type": "Value"}, "Category": {"num_classes": 9, "names": ["ArgumentsUsed", "Discriminating", "Inappropriate", "OffTopic", "PersonalStories", "PossiblyFeedback", "SentimentNegative", "SentimentNeutral", "SentimentPositive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Value": {"dtype": "int32", "id": null, "_type": "Value"}, "Fold": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "omp", "config_name": "posts_labeled", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 13955964, "num_examples": 40567, "dataset_name": "omp"}}, "download_checksums": {"https://github.com/aseifert/million-post-corpus/raw/master/data/posts_labeled.csv.xz": {"num_bytes": 1329892, "checksum": "2d1cb6cd8fec07c5d378f9be889b355c1eb23b30e8fde2dcbb073cdad6f472ad"}}, "download_size": 1329892, "post_processing_size": null, "dataset_size": 13955964, "size_in_bytes": 15285856}, "posts_unlabeled": {"description": "The \u201cOne Million Posts\u201d corpus is an annotated data set consisting of\nuser comments posted to an Austrian newspaper website (in German language).\n\nDER STANDARD is an Austrian daily broadsheet newspaper. On the newspaper\u2019s website,\nthere is a discussion section below each news article where readers engage in\nonline discussions. The data set contains a selection of user posts from the\n12 month time span from 2015-06-01 to 2016-05-31. There are 11,773 labeled and\n1,000,000 unlabeled posts in the data set. The labeled posts were annotated by\nprofessional forum moderators employed by the newspaper.\n\nThe data set contains the following data for each post:\n\n* Post ID\n* Article ID\n* Headline (max. 250 characters)\n* Main Body (max. 
750 characters)\n* User ID (the user names used by the website have been re-mapped to new numeric IDs)\n* Time stamp\n* Parent post (replies give rise to tree-like discussion thread structures)\n* Status (online or deleted by a moderator)\n* Number of positive votes by other community members\n* Number of negative votes by other community members\n\nFor each article, the data set contains the following data:\n\n* Article ID\n* Publishing date\n* Topic Path (e.g.: Newsroom / Sports / Motorsports / Formula 1)\n* Title\n* Body\n\nDetailed descriptions of the post selection and annotation procedures are given in the paper.\n\n## Annotated Categories\n\nPotentially undesirable content:\n\n* Sentiment (negative/neutral/positive)\n An important goal is to detect changes in the prevalent sentiment in a discussion, e.g.,\n the location within the fora and the point in time where a turn from positive/neutral\n sentiment to negative sentiment takes place.\n* Off-Topic (yes/no)\n Posts which digress too far from the topic of the corresponding article.\n* Inappropriate (yes/no)\n Swearwords, suggestive and obscene language, insults, threats etc.\n* Discriminating (yes/no)\n Racist, sexist, misogynistic, homophobic, antisemitic and other misanthropic content.\n\nNeutral content that requires a reaction:\n\n* Feedback (yes/no)\n Sometimes users ask questions or give feedback to the author of the article or the\n newspaper in general, which may require a reply/reaction.\n\nPotentially desirable content:\n\n* Personal Stories (yes/no)\n In certain fora, users are encouraged to share their personal stories, experiences,\n anecdotes etc. regarding the respective topic.\n* Arguments Used (yes/no)\n It is desirable for users to back their statements with rational argumentation,\n reasoning and sources.\n", "citation": "@InProceedings{Schabus2017,\n Author = {Dietmar Schabus and Marcin Skowron and Martin Trapp},\n Title = {One Million Posts: A Data Set of German Online Discussions},\n Booktitle = {Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)},\n Pages = {1241--1244},\n Year = {2017},\n Address = {Tokyo, Japan},\n Doi = {10.1145/3077136.3080711},\n Month = aug\n}\n", "homepage": "https://ofai.github.io/million-post-corpus/", "license": "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License", "features": {"ID_Post": {"dtype": "string", "id": null, "_type": "Value"}, "ID_Parent_Post": {"dtype": "string", "id": null, "_type": "Value"}, "ID_Article": {"dtype": "string", "id": null, "_type": "Value"}, "ID_User": {"dtype": "string", "id": null, "_type": "Value"}, "CreatedAt": {"dtype": "string", "id": null, "_type": "Value"}, "Status": {"dtype": "string", "id": null, "_type": "Value"}, "Headline": {"dtype": "string", "id": null, "_type": "Value"}, "Body": {"dtype": "string", "id": null, "_type": "Value"}, "PositiveVotes": {"dtype": "int32", "id": null, "_type": "Value"}, "NegativeVotes": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "omp", "config_name": "posts_unlabeled", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 305770324, "num_examples": 1000000, "dataset_name": "omp"}}, "download_checksums": {"https://github.com/aseifert/million-post-corpus/raw/master/data/posts_unlabeled.csv.xz": {"num_bytes": 79296188, "checksum": 
"433e80787abf587ecbd54756b3b200b57b7ef31041f19ba0bb8d2c5cc39cad65"}}, "download_size": 79296188, "post_processing_size": null, "dataset_size": 305770324, "size_in_bytes": 385066512}, "articles": {"description": "The \u201cOne Million Posts\u201d corpus is an annotated data set consisting of\nuser comments posted to an Austrian newspaper website (in German language).\n\nDER STANDARD is an Austrian daily broadsheet newspaper. On the newspaper\u2019s website,\nthere is a discussion section below each news article where readers engage in\nonline discussions. The data set contains a selection of user posts from the\n12 month time span from 2015-06-01 to 2016-05-31. There are 11,773 labeled and\n1,000,000 unlabeled posts in the data set. The labeled posts were annotated by\nprofessional forum moderators employed by the newspaper.\n\nThe data set contains the following data for each post:\n\n* Post ID\n* Article ID\n* Headline (max. 250 characters)\n* Main Body (max. 750 characters)\n* User ID (the user names used by the website have been re-mapped to new numeric IDs)\n* Time stamp\n* Parent post (replies give rise to tree-like discussion thread structures)\n* Status (online or deleted by a moderator)\n* Number of positive votes by other community members\n* Number of negative votes by other community members\n\nFor each article, the data set contains the following data:\n\n* Article ID\n* Publishing date\n* Topic Path (e.g.: Newsroom / Sports / Motorsports / Formula 1)\n* Title\n* Body\n\nDetailed descriptions of the post selection and annotation procedures are given in the paper.\n\n## Annotated Categories\n\nPotentially undesirable content:\n\n* Sentiment (negative/neutral/positive)\n An important goal is to detect changes in the prevalent sentiment in a discussion, e.g.,\n the location within the fora and the point in time where a turn from positive/neutral\n sentiment to negative sentiment takes place.\n* Off-Topic (yes/no)\n Posts which digress too far from the topic of the corresponding article.\n* Inappropriate (yes/no)\n Swearwords, suggestive and obscene language, insults, threats etc.\n* Discriminating (yes/no)\n Racist, sexist, misogynistic, homophobic, antisemitic and other misanthropic content.\n\nNeutral content that requires a reaction:\n\n* Feedback (yes/no)\n Sometimes users ask questions or give feedback to the author of the article or the\n newspaper in general, which may require a reply/reaction.\n\nPotentially desirable content:\n\n* Personal Stories (yes/no)\n In certain fora, users are encouraged to share their personal stories, experiences,\n anecdotes etc. 
regarding the respective topic.\n* Arguments Used (yes/no)\n It is desirable for users to back their statements with rational argumentation,\n reasoning and sources.\n", "citation": "@InProceedings{Schabus2017,\n Author = {Dietmar Schabus and Marcin Skowron and Martin Trapp},\n Title = {One Million Posts: A Data Set of German Online Discussions},\n Booktitle = {Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)},\n Pages = {1241--1244},\n Year = {2017},\n Address = {Tokyo, Japan},\n Doi = {10.1145/3077136.3080711},\n Month = aug\n}\n", "homepage": "https://ofai.github.io/million-post-corpus/", "license": "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License", "features": {"ID_Article": {"dtype": "string", "id": null, "_type": "Value"}, "Path": {"dtype": "string", "id": null, "_type": "Value"}, "publishingDate": {"dtype": "string", "id": null, "_type": "Value"}, "Title": {"dtype": "string", "id": null, "_type": "Value"}, "Body": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "omp", "config_name": "articles", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 43529400, "num_examples": 12087, "dataset_name": "omp"}}, "download_checksums": {"https://github.com/aseifert/million-post-corpus/raw/master/data/articles.csv.xz": {"num_bytes": 10681288, "checksum": "ff707a8adddd0f8785c7668b051d01b69cfac696db89afbd46054656f909a479"}}, "download_size": 10681288, "post_processing_size": null, "dataset_size": 43529400, "size_in_bytes": 54210688}}
dummy/articles/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa1caecd33456774c75815fc73242dbbea9d05308eeadfc20257837c37d0d237
+ size 4344
dummy/posts_labeled/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24dc97354b469f4e7c2a5b9a9ec365cdc10c95629dcf208e6f1aeb5c0d7ee780
+ size 1143
dummy/posts_unlabeled/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:40de6f8997a1129eff16c554949c391011d84026a3c28c24aec7ad74376055b1
+ size 1575
omp.py ADDED
@@ -0,0 +1,269 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """The “One Million Posts” corpus is an annotated data set consisting of
+ user comments posted to an Austrian newspaper website (in German language)."""
+
+ from __future__ import absolute_import, division, print_function
+
+ from pathlib import Path
+
+ import pandas as pd
+
+ import datasets
+
+
+ _CITATION = """\
+ @InProceedings{Schabus2017,
+ Author = {Dietmar Schabus and Marcin Skowron and Martin Trapp},
+ Title = {One Million Posts: A Data Set of German Online Discussions},
+ Booktitle = {Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)},
+ Pages = {1241--1244},
+ Year = {2017},
+ Address = {Tokyo, Japan},
+ Doi = {10.1145/3077136.3080711},
+ Month = aug
+ }
+ """
+
+ _DESCRIPTION = """\
+ The “One Million Posts” corpus is an annotated data set consisting of
+ user comments posted to an Austrian newspaper website (in German language).
+
+ DER STANDARD is an Austrian daily broadsheet newspaper. On the newspaper’s website,
+ there is a discussion section below each news article where readers engage in
+ online discussions. The data set contains a selection of user posts from the
+ 12 month time span from 2015-06-01 to 2016-05-31. There are 11,773 labeled and
+ 1,000,000 unlabeled posts in the data set. The labeled posts were annotated by
+ professional forum moderators employed by the newspaper.
+
+ The data set contains the following data for each post:
+
+ * Post ID
+ * Article ID
+ * Headline (max. 250 characters)
+ * Main Body (max. 750 characters)
+ * User ID (the user names used by the website have been re-mapped to new numeric IDs)
+ * Time stamp
+ * Parent post (replies give rise to tree-like discussion thread structures)
+ * Status (online or deleted by a moderator)
+ * Number of positive votes by other community members
+ * Number of negative votes by other community members
+
+ For each article, the data set contains the following data:
+
+ * Article ID
+ * Publishing date
+ * Topic Path (e.g.: Newsroom / Sports / Motorsports / Formula 1)
+ * Title
+ * Body
+
+ Detailed descriptions of the post selection and annotation procedures are given in the paper.
+
+ ## Annotated Categories
+
+ Potentially undesirable content:
+
+ * Sentiment (negative/neutral/positive)
+ An important goal is to detect changes in the prevalent sentiment in a discussion, e.g.,
+ the location within the fora and the point in time where a turn from positive/neutral
+ sentiment to negative sentiment takes place.
+ * Off-Topic (yes/no)
+ Posts which digress too far from the topic of the corresponding article.
+ * Inappropriate (yes/no)
+ Swearwords, suggestive and obscene language, insults, threats etc.
+ * Discriminating (yes/no)
+ Racist, sexist, misogynistic, homophobic, antisemitic and other misanthropic content.
+
+ Neutral content that requires a reaction:
+
+ * Feedback (yes/no)
+ Sometimes users ask questions or give feedback to the author of the article or the
+ newspaper in general, which may require a reply/reaction.
+
+ Potentially desirable content:
+
+ * Personal Stories (yes/no)
+ In certain fora, users are encouraged to share their personal stories, experiences,
+ anecdotes etc. regarding the respective topic.
+ * Arguments Used (yes/no)
+ It is desirable for users to back their statements with rational argumentation,
+ reasoning and sources.
+ """
+
+ _HOMEPAGE = "https://ofai.github.io/million-post-corpus/"
+
+ _LICENSE = "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License"
+
+ _URLs = {
+     "posts_labeled": "https://github.com/aseifert/million-post-corpus/raw/master/data/posts_labeled.csv.xz",
+     "posts_unlabeled": "https://github.com/aseifert/million-post-corpus/raw/master/data/posts_unlabeled.csv.xz",
+     "articles": "https://github.com/aseifert/million-post-corpus/raw/master/data/articles.csv.xz",
+ }
+
+
+ class Omp(datasets.GeneratorBasedBuilder):
+     """The “One Million Posts” corpus is an annotated data set consisting of user comments
+     posted to an Austrian newspaper website (in German language). Annotated categories include:
+     sentiment (negative/neutral/positive), off-topic (yes/no), inappropriate (yes/no),
+     discriminating (yes/no), feedback (yes/no), personal story (yes/no), arguments used (yes/no)."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="posts_labeled",
+             version=VERSION,
+             description="This part of the dataset includes labeled posts (11,773 annotated posts)",
+         ),
+         datasets.BuilderConfig(
+             name="posts_unlabeled",
+             version=VERSION,
+             description="This part of the dataset includes unlabeled posts (1,000,000)",
+         ),
+         datasets.BuilderConfig(
+             name="articles",
+             version=VERSION,
+             description="This part of the dataset includes the articles that the comments were posted to (~12k)",
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = (
+         "posts_labeled"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
+     )
+
+     def _info(self):
+         if self.config.name == "posts_labeled":
+             features = datasets.Features(
+                 {
+                     "ID_Post": datasets.Value("string"),
+                     "ID_Parent_Post": datasets.Value("string"),
+                     "ID_Article": datasets.Value("string"),
+                     "ID_User": datasets.Value("string"),
+                     "CreatedAt": datasets.Value("string"),
+                     "Status": datasets.Value("string"),
+                     "Headline": datasets.Value("string"),
+                     "Body": datasets.Value("string"),
+                     "PositiveVotes": datasets.Value("int32"),
+                     "NegativeVotes": datasets.Value("int32"),
+                     "Category": datasets.features.ClassLabel(
+                         names=[
+                             "ArgumentsUsed",
+                             "Discriminating",
+                             "Inappropriate",
+                             "OffTopic",
+                             "PersonalStories",
+                             "PossiblyFeedback",
+                             "SentimentNegative",
+                             "SentimentNeutral",
+                             "SentimentPositive",
+                         ]
+                     ),
+                     "Value": datasets.Value("int32"),
+                     "Fold": datasets.Value("int32"),
+                 }
+             )
+         elif self.config.name == "posts_unlabeled":
+             features = datasets.Features(
+                 {
+                     "ID_Post": datasets.Value("string"),
+                     "ID_Parent_Post": datasets.Value("string"),
+                     "ID_Article": datasets.Value("string"),
+                     "ID_User": datasets.Value("string"),
+                     "CreatedAt": datasets.Value("string"),
+                     "Status": datasets.Value("string"),
+                     "Headline": datasets.Value("string"),
+                     "Body": datasets.Value("string"),
+                     "PositiveVotes": datasets.Value("int32"),
+                     "NegativeVotes": datasets.Value("int32"),
+                 }
+             )
+         elif self.config.name == "articles":
+             features = datasets.Features(
+                 {
+                     "ID_Article": datasets.Value("string"),
+                     "Path": datasets.Value("string"),
+                     "publishingDate": datasets.Value("string"),
+                     "Title": datasets.Value("string"),
+                     "Body": datasets.Value("string"),
+                 }
+             )
+         else:
+             assert False
+
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,  # Here we define them above because they are different between the two configurations
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+
+         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs
+         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
+         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
+         my_urls = _URLs[self.config.name]
+         data_path = Path(dl_manager.download_and_extract(my_urls))
+         if data_path.is_dir():
+             if self.config.name == "posts_labeled":
+                 fname = "posts_labeled.csv.gz"
+             elif self.config.name == "posts_unlabeled":
+                 fname = "posts_unlabeled.csv.gz"
+             elif self.config.name == "articles":
+                 fname = "articles.csv.gz"
+             else:
+                 assert False
+             data_path = data_path / fname
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"filepath": str(data_path), "split": "train"},
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """ Yields examples. """
+
+         if self.config.name in ["posts_labeled", "posts_unlabeled"]:
+             posts = pd.read_csv(
+                 filepath,
+                 compression=None,
+                 dtype={"ID_Post": str, "ID_Parent_Post": str, "ID_Article": str, "ID_User": str},
+             )
+             posts.fillna("", inplace=True)
+             for i, row in posts.iterrows():
+                 yield row["ID_Post"], row.to_dict()
+         elif self.config.name == "articles":
+             articles = pd.read_csv(
+                 filepath,
+                 compression=None,
+                 dtype={"ID_Article": str, "Path": str, "publishingDate": str, "ID_User": str},
+             )
+             articles.fillna("", inplace=True)
+             for i, row in articles.iterrows():
+                 yield row["ID_Article"], row.to_dict()
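
As a usage sketch for the files added in this commit (not part of the loading script itself): the CSVs that `_URLs` points to can also be read directly with pandas, mirroring what `_generate_examples` does above, and the tree-like discussion threads described in the dataset card can be reconstructed from `ID_Parent_Post`:

```python
import pandas as pd

# Same xz-compressed CSV that the "posts_labeled" config downloads;
# pandas should infer the compression from the file extension.
URL = "https://github.com/aseifert/million-post-corpus/raw/master/data/posts_labeled.csv.xz"

posts = pd.read_csv(URL, dtype={"ID_Post": str, "ID_Parent_Post": str, "ID_Article": str, "ID_User": str})
posts.fillna("", inplace=True)

# Replies carry their parent's ID_Post in ID_Parent_Post, so grouping on that column
# recovers the reply structure of each discussion thread.
replies = posts[posts["ID_Parent_Post"] != ""].groupby("ID_Parent_Post")["ID_Post"].apply(list)
print(replies.head())
```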