parquet-converter committed on
Commit
e78e799
1 Parent(s): b78b02e

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,342 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language_creators:
- - found
- language:
- - en
- license:
- - unknown
- multilinguality:
- - monolingual
- size_categories:
- - 1K<n<10K
- source_datasets:
- - extended|other-MPQA-KBP Challenge-MediaRank
- task_categories:
- - text-classification
- task_ids:
- - sentiment-classification
- paperswithcode_id: persent
- pretty_name: PerSenT
- dataset_info:
-   features:
-   - name: DOCUMENT_INDEX
-     dtype: int64
-   - name: TITLE
-     dtype: string
-   - name: TARGET_ENTITY
-     dtype: string
-   - name: DOCUMENT
-     dtype: string
-   - name: MASKED_DOCUMENT
-     dtype: string
-   - name: TRUE_SENTIMENT
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph0
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph1
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph2
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph3
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph4
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph5
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph6
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph7
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph8
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph9
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph10
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph11
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph12
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph13
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph14
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   - name: Paragraph15
-     dtype:
-       class_label:
-         names:
-           0: Negative
-           1: Neutral
-           2: Positive
-   splits:
-   - name: train
-     num_bytes: 14595163
-     num_examples: 3355
-   - name: test_random
-     num_bytes: 2629500
-     num_examples: 579
-   - name: test_fixed
-     num_bytes: 3881800
-     num_examples: 827
-   - name: validation
-     num_bytes: 2322922
-     num_examples: 578
-   download_size: 23117196
-   dataset_size: 23429385
- ---
-
- # Dataset Card for PerSenT
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [PerSenT](https://stonybrooknlp.github.io/PerSenT/)
- - **Repository:** [https://github.com/MHDBST/PerSenT](https://github.com/MHDBST/PerSenT)
- - **Paper:** [arXiv](https://arxiv.org/abs/2011.06128)
- - **Leaderboard:** NA
- - **Point of Contact:** [Mohaddeseh Bastan](mbastan@cs.stonybrook.edu)
-
- ### Dataset Summary
-
- PerSenT is a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. This dataset contains annotations for 5.3k documents and 38k paragraphs covering 3.2k unique entities. For each article, annotators judge what the author’s sentiment is towards the main (target) entity of the article. The annotations also include similar judgments on paragraphs within the article.
-
- ### Supported Tasks and Leaderboards
-
- Sentiment Classification: Each document consists of multiple paragraphs. Each paragraph is labeled separately (Positive, Neutral, Negative), and the author’s sentiment towards the whole document is included as a document-level label.
-
- ### Languages
-
- English
-
- ## Dataset Structure
-
- ### Data Instances
-
- ```json
- {'DOCUMENT': "Germany's Landesbank Baden Wuertemberg won EU approval Tuesday for a state bailout after it promised to shrink its balance sheet by 40 percent and refocus on lending to companies.\n The bank was several state-owned German institutions to run into trouble last year after it ran up more huge losses from investing in high-risk proprietary trading and capital market activities -- a business the EU has now told it to shun.\n Seven current and former managers of the bank are also being investigated by German authorities for risking or damaging the bank's capital by carrying out or failing to block investments in high-risk deals worth hundreds of millions from 2006.\n The European Commission said its Tuesday approval for the state rescue of the bank and its new restructuring plan would allow it become a viable business again -- and that the cutbacks would help limit the unfair advantage over rivals that the bank would get from the state aid.\n Stuttgart-based LBBW earlier this year received a capital injection of (EURO)5 billion from the bank's shareholders all of them public authorities or state-owned including the state of Baden-Wuerttemberg the region's savings bank association and the city of Stuttgart.",
- 'DOCUMENT_INDEX': 1,
- 'MASKED_DOCUMENT': "[TGT] won EU approval Tuesday for a state bailout after it promised to shrink its balance sheet by 40 percent and refocus on lending to companies.\n [TGT] was several state-owned German institutions to run into trouble last year after [TGT] ran up more huge losses from investing in high-risk proprietary trading and capital market activities -- a business the EU has now told it to shun.\n Seven current and former managers of [TGT] are also being investigated by German authorities for risking or damaging [TGT]'s capital by carrying out or failing to block investments in high-risk deals worth hundreds of millions from 2006.\n The European Commission said its Tuesday approval for the state rescue of [TGT] and its new restructuring plan would allow it become a viable business again -- and that the cutbacks would help limit the unfair advantage over rivals that [TGT] would get from the state aid.\n Stuttgart-based LBBW earlier this year received a capital injection of (EURO)5 billion from [TGT]'s shareholders all of them public authorities or state-owned including the state of Baden-Wuerttemberg the region's savings bank association and the city of Stuttgart.",
- 'Paragraph0': 2,
- 'Paragraph1': 0,
- 'Paragraph10': -1,
- 'Paragraph11': -1,
- 'Paragraph12': -1,
- 'Paragraph13': -1,
- 'Paragraph14': -1,
- 'Paragraph15': -1,
- 'Paragraph2': 0,
- 'Paragraph3': 1,
- 'Paragraph4': 1,
- 'Paragraph5': -1,
- 'Paragraph6': -1,
- 'Paragraph7': -1,
- 'Paragraph8': -1,
- 'Paragraph9': -1,
- 'TARGET_ENTITY': 'Landesbank Baden Wuertemberg',
- 'TITLE': 'German bank LBBW wins EU bailout approval',
- 'TRUE_SENTIMENT': 0}
- ```
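The integer labels in the instance above follow the `ClassLabel` mapping declared in the card's YAML header (0 = Negative, 1 = Neutral, 2 = Positive), with `-1` marking a paragraph slot that has no annotation. As a minimal plain-Python sketch (the helper name `decode_label` is illustrative, not part of the dataset's API):

```python
# Class names in index order, as declared in the dataset's ClassLabel feature.
LABEL_NAMES = ["Negative", "Neutral", "Positive"]

def decode_label(value: int) -> str:
    """Map a stored label int to its name; -1 (or any out-of-range
    value) marks a paragraph slot with no annotation."""
    return LABEL_NAMES[value] if 0 <= value < len(LABEL_NAMES) else "missing"

# Decode a few fields of the instance shown above.
decoded = {k: decode_label(v)
           for k, v in {"TRUE_SENTIMENT": 0, "Paragraph0": 2, "Paragraph5": -1}.items()}
# decoded == {'TRUE_SENTIMENT': 'Negative', 'Paragraph0': 'Positive', 'Paragraph5': 'missing'}
```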
-
- ### Data Fields
-
- - DOCUMENT_INDEX: ID of the document in the original dataset
- - TITLE: Title of the article
- - DOCUMENT: Text of the article
- - MASKED_DOCUMENT: Text of the article with the target entity masked by the `[TGT]` token
- - TARGET_ENTITY: The entity the author is expressing an opinion about
- - TRUE_SENTIMENT: Label for the entire article
- - Paragraph{0..15}: Label for each paragraph in the article
-
- **Note**: Labels are one of `[Negative, Neutral, Positive]`. Missing labels were replaced with `-1`.
-
- ### Data Splits
-
- To split the dataset, entities were divided into 4 mutually exclusive sets. Due to the nature of news collections, some entities tend to dominate: four entities were the main entity in nearly 800 articles. To keep these entities from dominating the train or test splits, their articles were moved to a separate test collection. The remaining articles were split into training, dev, and test sets at random. The collection therefore includes one standard test set of articles drawn at random (Test Standard) and a second test set containing multiple articles about a small number of popular entities (Test Frequent).
-
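The entity-disjoint strategy described above can be sketched as follows. This is a toy illustration: the 80/10/10 proportions, function name, and entity names are assumptions, not taken from the paper, and the real release additionally holds out the four most frequent entities as a separate test set.

```python
import random

def entity_disjoint_split(articles, seed=0):
    """Split (entity, article_id) pairs so that no entity
    appears in more than one of train/dev/test."""
    entities = sorted({entity for entity, _ in articles})
    rng = random.Random(seed)
    rng.shuffle(entities)
    n = len(entities)
    train_e = set(entities[: int(0.8 * n)])
    dev_e = set(entities[int(0.8 * n): int(0.9 * n)])
    splits = {"train": [], "dev": [], "test": []}
    for entity, article_id in articles:
        if entity in train_e:
            splits["train"].append(article_id)
        elif entity in dev_e:
            splits["dev"].append(article_id)
        else:
            splits["test"].append(article_id)
    return splits

# Toy usage: every article about a given entity lands in the same split.
demo = entity_disjoint_split([("Alice", 1), ("Alice", 2), ("Bob", 3)])
```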
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- Articles were selected from 3 sources:
- 1. MPQA (Deng and Wiebe, 2015; Wiebe et al., 2005): This dataset contains news articles manually annotated for opinions, beliefs, emotions, sentiments, speculations, etc. It also has target annotations, which are entities and events anchored to the heads of noun or verb phrases. All decisions on this dataset are made at the sentence level and over short spans.
- 2. KBP Challenge (Ellis et al., 2014): This resource contains the TAC 2014 KBP English sentiment slot-filling challenge dataset, a document-level sentiment slot-filling task. Given an entity and a sentiment (positive/negative) from the document, the goal is to find entities toward which the original entity holds the given sentiment. We selected documents from this resource that were used in similar sentiment analysis work (Choi et al., 2016).
- 3. Media Rank (Ye and Skiena, 2019): This dataset ranks about 50k news sources along different aspects. It has also been used for classifying the political ideology of news articles (Kulkarni et al., 2018).
-
- Pre-processing steps:
- - First, we found all the person entities in each article using the Stanford NER (Named Entity Recognition) tagger (Finkel et al., 2005), and all mentions of them using co-reference resolution (Clark and Manning, 2016; Co, 2017).
- - We removed articles that are not likely to have a main entity of focus, using a simple heuristic: drop articles in which the most frequent person entity is mentioned only three times or less (even when counting co-referent mentions).
- - For the articles that remain, we deemed the most frequent entity to be the main entity of the article. We also filtered out extremely long and extremely short articles, keeping those with at least 3 and at most 16 paragraphs.
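The two filtering heuristics above (a main entity mentioned more than three times, and a length of 3 to 16 paragraphs) can be sketched like so; the function names and the mention counts are illustrative:

```python
def main_entity(mention_counts):
    """Pick the most frequently mentioned person entity
    (counts include co-referent mentions)."""
    return max(mention_counts, key=mention_counts.get)

def keep_article(mention_counts, num_paragraphs):
    """Keep an article only if its most frequent person entity is
    mentioned more than three times and the article has 3..16 paragraphs."""
    if not mention_counts or max(mention_counts.values()) <= 3:
        return False
    return 3 <= num_paragraphs <= 16
```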
-
- Documents are randomly separated into train, dev, and two test sets. We ensure that each entity appears in only one of the sets; our goal is to avoid easy-to-learn biases over entities. To keep the most frequent entities from dominating the training or test sets, we remove articles that cover the most frequent entities and use them as a separate test set (referred to as the frequent test set) in addition to the randomly drawn standard test set.
-
- ### Annotations
-
- #### Annotation process
-
- We obtained document- and paragraph-level annotations with the help of Amazon Mechanical Turk workers. The workers first verified that the target entity we provide is indeed the main entity in the document. Then, they rated each paragraph in the document that contained a direct mention of, or a reference to, the target entity. Last, they rated the sentiment towards the entity based on the entire document. In both cases, the workers made assessments about the author's view based on what the author said about the target entity. For both paragraph- and document-level sentiment, the workers chose from five rating categories: Negative, Slightly Negative, Neutral, Slightly Positive, or Positive. We then combine the fine-grained annotations to obtain three coarse-grained classes: Negative, Neutral, or Positive.
-
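The released labels use only the three coarse classes. One plausible reading of the merging step (an assumption here; the paper defines the exact rule) folds each "Slightly" rating into its polarity:

```python
# Assumed merging rule: "Slightly" ratings collapse into their polarity.
FINE_TO_COARSE = {
    "Negative": "Negative",
    "Slightly Negative": "Negative",
    "Neutral": "Neutral",
    "Slightly Positive": "Positive",
    "Positive": "Positive",
}

def coarsen(rating: str) -> str:
    """Map a five-way crowd rating to a coarse three-way class."""
    return FINE_TO_COARSE[rating]
```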
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- [More Information Needed]
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/)
-
- ### Citation Information
-
- ```
- @inproceedings{bastan2020authors,
-     title={Author's Sentiment Prediction},
-     author={Mohaddeseh Bastan and Mahnaz Koupaee and Youngseo Son and Richard Sicoli and Niranjan Balasubramanian},
-     year={2020},
-     eprint={2011.06128},
-     archivePrefix={arXiv},
-     primaryClass={cs.CL}
- }
- ```
-
- ### Contributions
-
- Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset.
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "Person SenTiment (PerSenT) is a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. This dataset contains annotation for 5.3k documents and 38k paragraphs covering 3.2k unique entities.\n\nThe dataset consists of sentiment annotations on news articles about people. For each article, annotators judge what the author\u2019s sentiment is towards the main (target) entity of the article. The annotations also include similar judgments on paragraphs within the article.\n\nTo split the dataset, entities into 4 mutually exclusive sets. Due to the nature of news collections, some entities tend to dominate the collection. In the collection, there were four entities which were the main entity in nearly 800 articles. To avoid these entities from dominating the train or test splits, we moved them to a separate test collection. We split the remaining into a training, dev, and test sets at random. Thus our collection includes one standard test set consisting of articles drawn at random (Test Standard -- `test_random`), while the other is a test set which contains multiple articles about a small number of popular entities (Test Frequent -- `test_fixed`).\n", "citation": "@inproceedings{bastan2020authors,\n title={Author's Sentiment Prediction},\n author={Mohaddeseh Bastan and Mahnaz Koupaee and Youngseo Son and Richard Sicoli and Niranjan Balasubramanian},\n year={2020},\n eprint={2011.06128},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}\n", "homepage": "https://stonybrooknlp.github.io/PerSenT", "license": "Creative Commons Attribution 4.0 International License", "features": {"DOCUMENT_INDEX": {"dtype": "int64", "id": null, "_type": "Value"}, "TITLE": {"dtype": "string", "id": null, "_type": "Value"}, "TARGET_ENTITY": {"dtype": "string", "id": null, "_type": "Value"}, "DOCUMENT": {"dtype": "string", "id": null, "_type": "Value"}, "MASKED_DOCUMENT": {"dtype": "string", "id": null, "_type": 
"Value"}, "TRUE_SENTIMENT": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph0": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph1": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph2": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph3": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph4": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph5": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph6": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph7": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph8": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph9": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph10": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph11": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph12": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph13": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], 
"names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph14": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Paragraph15": {"num_classes": 3, "names": ["Negative", "Neutral", "Positive"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "per_sent", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 14595163, "num_examples": 3355, "dataset_name": "per_sent"}, "test_random": {"name": "test_random", "num_bytes": 2629500, "num_examples": 579, "dataset_name": "per_sent"}, "test_fixed": {"name": "test_fixed", "num_bytes": 3881800, "num_examples": 827, "dataset_name": "per_sent"}, "validation": {"name": "validation", "num_bytes": 2322922, "num_examples": 578, "dataset_name": "per_sent"}}, "download_checksums": {"https://raw.githubusercontent.com/MHDBST/PerSenT/main/train.csv": {"num_bytes": 14397450, "checksum": "7fbca893d6c29e937dbf7d445cfec2a86de10977baf20de5ac0468994245cede"}, "https://raw.githubusercontent.com/MHDBST/PerSenT/main/dev.csv": {"num_bytes": 2289404, "checksum": "9db2929ff199b6beff0d4484b1e566808a6abc6648f46467dabdac65cb7f4887"}, "https://raw.githubusercontent.com/MHDBST/PerSenT/main/fixed_test.csv": {"num_bytes": 3833535, "checksum": "344c086ec880f9b2bca107d9592535dcab7065f25c5ba7a1f6520328fe9c7962"}, "https://raw.githubusercontent.com/MHDBST/PerSenT/main/random_test.csv": {"num_bytes": 2596807, "checksum": "b1681bd9c4e1ae5b87e0ecc90cfa08086669ba400bde64adaa43814465528ac7"}}, "download_size": 23117196, "post_processing_size": null, "dataset_size": 23429385, "size_in_bytes": 46546581}}
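The infos file above records a SHA-256 checksum and byte size for each source CSV under `download_checksums`. A downloaded file could be verified along these lines (a sketch; demonstrated on in-memory bytes rather than the real multi-megabyte files, and the function name is illustrative):

```python
import hashlib

def verify_download(data: bytes, expected_sha256: str, expected_size: int) -> bool:
    """Return True iff the payload matches the recorded size and SHA-256 digest."""
    return (len(data) == expected_size
            and hashlib.sha256(data).hexdigest() == expected_sha256)

payload = b"example bytes"
digest = hashlib.sha256(payload).hexdigest()
assert verify_download(payload, digest, len(payload))
assert not verify_download(payload, digest, len(payload) + 1)
```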
 
 
default/per_sent-test_fixed.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a587e9aab0dcf1cee50255e39b7dda5b46e1872f11a96db543a2c9e2e8e82410
+ size 2265063
default/per_sent-test_random.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0075430fdc9caaf37a4c8e7864baba005da2600c007710613f44934d5d0089dd
+ size 1600016
default/per_sent-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3579b4167d93bc02d8b41837b3ba3b3eecf27d54803e167ef304bd3055d80925
+ size 8843914
default/per_sent-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce6fe537cc06047d6b1c81cf04ea25a5dea93e168c110394e4d92515f4154278
+ size 1411720
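Each ADDED file above is a git-LFS pointer, not the parquet data itself. A small sketch of parsing one (using the `test_fixed` pointer shown above; requires Python 3.9+ for `str.removeprefix`):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-LFS pointer file into its version, oid, and size fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:a587e9aab0dcf1cee50255e39b7dda5b46e1872f11a96db543a2c9e2e8e82410\n"
    "size 2265063\n"
)
info = parse_lfs_pointer(pointer)
# info["size"] == 2265063
```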
per_sent.py DELETED
@@ -1,149 +0,0 @@
1
- # coding=utf-8
2
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
- """ **Person SenTiment, a challenge dataset for author sentiment prediction in the news domain **
16
-
17
- PerSenT is a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. This dataset contains annotation for 5.3k documents and 38k paragraphs covering 3.2k unique entities.
18
-
19
- """
20
-
21
-
22
- import csv
23
-
24
- import datasets
25
- from datasets.splits import NamedSplit
26
-
27
-
28
- # TODO: Add BibTeX citation
29
- # Find for instance the citation on arxiv or on the dataset repo/website
30
- _CITATION = """\
31
- @inproceedings{bastan2020authors,
32
- title={Author's Sentiment Prediction},
33
- author={Mohaddeseh Bastan and Mahnaz Koupaee and Youngseo Son and Richard Sicoli and Niranjan Balasubramanian},
34
- year={2020},
35
- eprint={2011.06128},
36
- archivePrefix={arXiv},
37
- primaryClass={cs.CL}
38
- }
39
- """
40
-
41
- _DESCRIPTION = """\
42
- Person SenTiment (PerSenT) is a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. This dataset contains annotation for 5.3k documents and 38k paragraphs covering 3.2k unique entities.
43
-
44
- The dataset consists of sentiment annotations on news articles about people. For each article, annotators judge what the author’s sentiment is towards the main (target) entity of the article. The annotations also include similar judgments on paragraphs within the article.
45
-
46
- To split the dataset, entities into 4 mutually exclusive sets. Due to the nature of news collections, some entities tend to dominate the collection. In the collection, there were four entities which were the main entity in nearly 800 articles. To avoid these entities from dominating the train or test splits, we moved them to a separate test collection. We split the remaining into a training, dev, and test sets at random. Thus our collection includes one standard test set consisting of articles drawn at random (Test Standard -- `test_random`), while the other is a test set which contains multiple articles about a small number of popular entities (Test Frequent -- `test_fixed`).
47
- """
48
-
49
- _LICENSE = "Creative Commons Attribution 4.0 International License"
50
-
51
- _URLs = {
52
- "train": "https://raw.githubusercontent.com/MHDBST/PerSenT/main/train.csv",
53
- "dev": "https://raw.githubusercontent.com/MHDBST/PerSenT/main/dev.csv",
54
- "test_random": "https://raw.githubusercontent.com/MHDBST/PerSenT/main/random_test.csv",
55
- "test_fixed": "https://raw.githubusercontent.com/MHDBST/PerSenT/main/fixed_test.csv",
56
- }
57
-
58
-
59
- class PerSent(datasets.GeneratorBasedBuilder):
60
- """Person SenTiment (PerSenT) is a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. This dataset contains annotations for 5.3k documents and 38k paragraphs covering 3.2k unique entities."""
61
-
62
- VERSION = datasets.Version("1.1.0")
63
- LABELS = ["Negative", "Neutral", "Positive"]
64
- LABEL_COLS = ["TRUE_SENTIMENT"] + ["Paragraph" + str(i) for i in range(16)]
65
-
66
- def _info(self):
67
- label = datasets.features.ClassLabel(names=self.LABELS)
68
- feature_dict = {
69
- "DOCUMENT_INDEX": datasets.Value("int64"),
70
- "TITLE": datasets.Value("string"),
71
- "TARGET_ENTITY": datasets.Value("string"),
72
- "DOCUMENT": datasets.Value("string"),
73
- "MASKED_DOCUMENT": datasets.Value("string"),
74
- }
75
- feature_dict.update({k: label for k in self.LABEL_COLS})
76
-
77
- return datasets.DatasetInfo(
78
- description=_DESCRIPTION,
79
- features=datasets.Features(feature_dict),
80
- supervised_keys=None,
81
- homepage="https://stonybrooknlp.github.io/PerSenT",
82
- license=_LICENSE,
83
- citation=_CITATION,
84
- )
85
-
86
- def _split_generators(self, dl_manager):
87
- """Returns SplitGenerators."""
88
- train_path = dl_manager.download(_URLs["train"])
89
- dev_path = dl_manager.download(_URLs["dev"])
90
- test_fixed_path = dl_manager.download(_URLs["test_fixed"])
91
- test_random_path = dl_manager.download(_URLs["test_random"])
92
-
93
- return [
94
- datasets.SplitGenerator(
95
- name=datasets.Split.TRAIN,
96
- # These kwargs will be passed to _generate_examples
97
- gen_kwargs={
98
- "filepath": train_path,
99
- "split": "train",
100
- },
101
- ),
102
- datasets.SplitGenerator(
103
- name=NamedSplit("test_random"),
104
- # These kwargs will be passed to _generate_examples
105
- gen_kwargs={"filepath": test_random_path, "split": "test_random"},
106
- ),
107
- datasets.SplitGenerator(
108
- name=NamedSplit("test_fixed"),
109
- # These kwargs will be passed to _generate_examples
110
- gen_kwargs={"filepath": test_fixed_path, "split": "test_fixed"},
111
- ),
112
- datasets.SplitGenerator(
113
- name=datasets.Split.VALIDATION,
114
- # These kwargs will be passed to _generate_examples
115
- gen_kwargs={
116
- "filepath": dev_path,
117
- "split": "dev",
118
- },
119
- ),
120
- ]
121
-
122
- def _generate_examples(self, filepath, split):
123
- """Yields examples.
124
-
125
- For examples with missing labels (empty strings in the original files), we replace with -1.
126
- """
127
-
128
- with open(filepath, encoding="utf-8") as f:
129
- reader = csv.reader(f)
130
-
131
- # Header
132
- _ = next(reader)
133
-
134
- for id_, row in enumerate(reader):
135
- doc_idx, title, target, doc, masked_doc, *labels = row
136
-
137
- # Replace missing labels with -1
138
- labels = [label if label in self.LABELS else -1 for label in labels]
139
-
140
- example = {
141
- "DOCUMENT_INDEX": doc_idx,
142
- "TITLE": title,
143
- "TARGET_ENTITY": target,
144
- "DOCUMENT": doc,
145
- "MASKED_DOCUMENT": masked_doc,
146
- }
147
- example.update(dict(zip(self.LABEL_COLS, labels)))
148
-
149
- yield id_, example
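The label handling in `_generate_examples` above can be exercised on a small in-memory CSV, with no `datasets` dependency (the sample row below is made up for illustration):

```python
import csv
import io

LABELS = ["Negative", "Neutral", "Positive"]
LABEL_COLS = ["TRUE_SENTIMENT"] + [f"Paragraph{i}" for i in range(16)]

def parse_rows(csv_text):
    """Yield example dicts, replacing any unrecognized label cell
    (e.g. the empty strings in the original files) with -1."""
    reader = csv.reader(io.StringIO(csv_text))
    next(reader)  # skip the header row
    for doc_idx, title, target, doc, masked_doc, *labels in reader:
        labels = [lab if lab in LABELS else -1 for lab in labels]
        example = {
            "DOCUMENT_INDEX": doc_idx,
            "TITLE": title,
            "TARGET_ENTITY": target,
            "DOCUMENT": doc,
            "MASKED_DOCUMENT": masked_doc,
        }
        example.update(dict(zip(LABEL_COLS, labels)))
        yield example

header = ",".join(
    ["DOCUMENT_INDEX", "TITLE", "TARGET_ENTITY", "DOCUMENT", "MASKED_DOCUMENT"] + LABEL_COLS
)
row = ",".join(["1", "t", "e", "d", "m", "Positive", "Negative"] + [""] * 15)
example = next(parse_rows(header + "\n" + row + "\n"))
# example["Paragraph1"] == -1  (missing label)
```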