---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-MPQA-KBP Challenge-MediaRank
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: persent
pretty_name: PerSenT
dataset_info:
  features:
  - name: DOCUMENT_INDEX
    dtype: int64
  - name: TITLE
    dtype: string
  - name: TARGET_ENTITY
    dtype: string
  - name: DOCUMENT
    dtype: string
  - name: MASKED_DOCUMENT
    dtype: string
  - name: TRUE_SENTIMENT
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph0
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph1
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph2
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph3
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph4
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph5
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph6
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph7
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph8
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph9
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph10
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph11
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph12
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph13
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph14
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  - name: Paragraph15
    dtype:
      class_label:
        names:
          '0': Negative
          '1': Neutral
          '2': Positive
  splits:
  - name: train
    num_bytes: 14595163
    num_examples: 3355
  - name: test_random
    num_bytes: 2629500
    num_examples: 579
  - name: test_fixed
    num_bytes: 3881800
    num_examples: 827
  - name: validation
    num_bytes: 2322922
    num_examples: 578
  download_size: 23117196
  dataset_size: 23429385
---

# Dataset Card for PerSenT

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [PerSenT](https://stonybrooknlp.github.io/PerSenT/)
- **Repository:** [https://github.com/MHDBST/PerSenT](https://github.com/MHDBST/PerSenT)
- **Paper:** [arXiv](https://arxiv.org/abs/2011.06128)
- **Leaderboard:** NA
- **Point of Contact:** [Mohaddeseh Bastan](mbastan@cs.stonybrook.edu)

### Dataset Summary

PerSenT is a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. It contains annotations for 5.3k documents and 38k paragraphs covering 3.2k unique entities. For each article, annotators judged the author's sentiment towards the main (target) entity of the article; the annotations also include similar judgments on paragraphs within the article.

### Supported Tasks and Leaderboards

Sentiment Classification: Each document consists of multiple paragraphs. Each paragraph is labeled separately (Positive, Neutral, Negative), and the author's sentiment towards the whole document is included as a document-level label.

### Languages

English

## Dataset Structure

### Data Instances

```json
{'DOCUMENT': "Germany's Landesbank Baden Wuertemberg won EU approval Tuesday for a state bailout after it promised to shrink its balance sheet by 40 percent and refocus on lending to companies.\n The bank was several state-owned German institutions to run into trouble last year after it ran up more huge losses from investing in high-risk proprietary trading and capital market activities -- a business the EU has now told it to shun.\n Seven current and former managers of the bank are also being investigated by German authorities for risking or damaging the bank's capital by carrying out or failing to block investments in high-risk deals worth hundreds of millions from 2006.\n The European Commission said its Tuesday approval for the state rescue of the bank and its new restructuring plan would allow it become a viable business again -- and that the cutbacks would help limit the unfair advantage over rivals that the bank would get from the state aid.\n Stuttgart-based LBBW earlier this year received a capital injection of (EURO)5 billion from the bank's shareholders  all of them public authorities or state-owned  including the state of Baden-Wuerttemberg  the region's savings bank association and the city of Stuttgart.",
 'DOCUMENT_INDEX': 1,
 'MASKED_DOCUMENT': "[TGT] won EU approval Tuesday for a state bailout after it promised to shrink its balance sheet by 40 percent and refocus on lending to companies.\n [TGT] was several state-owned German institutions to run into trouble last year after [TGT] ran up more huge losses from investing in high-risk proprietary trading and capital market activities -- a business the EU has now told it to shun.\n Seven current and former managers of [TGT] are also being investigated by German authorities for risking or damaging [TGT]'s capital by carrying out or failing to block investments in high-risk deals worth hundreds of millions from 2006.\n The European Commission said its Tuesday approval for the state rescue of [TGT] and its new restructuring plan would allow it become a viable business again -- and that the cutbacks would help limit the unfair advantage over rivals that [TGT] would get from the state aid.\n Stuttgart-based LBBW earlier this year received a capital injection of (EURO)5 billion from [TGT]'s shareholders  all of them public authorities or state-owned  including the state of Baden-Wuerttemberg  the region's savings bank association and the city of Stuttgart.",
 'Paragraph0': 2,
 'Paragraph1': 0,
 'Paragraph10': -1,
 'Paragraph11': -1,
 'Paragraph12': -1,
 'Paragraph13': -1,
 'Paragraph14': -1,
 'Paragraph15': -1,
 'Paragraph2': 0,
 'Paragraph3': 1,
 'Paragraph4': 1,
 'Paragraph5': -1,
 'Paragraph6': -1,
 'Paragraph7': -1,
 'Paragraph8': -1,
 'Paragraph9': -1,
 'TARGET_ENTITY': 'Landesbank Baden Wuertemberg',
 'TITLE': 'German bank LBBW wins EU bailout approval',
 'TRUE_SENTIMENT': 0}
```

### Data Fields

- DOCUMENT_INDEX: ID of the document in the original dataset
- TITLE: Title of the article
- DOCUMENT: Text of the article
- MASKED_DOCUMENT: Text of the article with every mention of the target entity replaced by the `[TGT]` token
- TARGET_ENTITY: The entity the author is expressing an opinion about
- TRUE_SENTIMENT: Document-level sentiment label for the entire article
- Paragraph{0..15}: Sentiment label for the corresponding paragraph

**Note**: Labels are one of `[Negative, Neutral, Positive]`. Missing labels were replaced with `-1`.
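
As a minimal sketch, the paragraph columns of a record like the example above can be decoded back into class names; the `-1` sentinel marks paragraphs that are either absent or unlabeled, so those entries are skipped:

```python
# Decode the integer paragraph labels of one PerSenT record into class
# names. -1 marks a missing label (short article or unlabeled paragraph).
LABEL_NAMES = {0: "Negative", 1: "Neutral", 2: "Positive"}

def paragraph_labels(record):
    """Return (paragraph index, label name) pairs for labeled paragraphs."""
    pairs = []
    for i in range(16):  # columns Paragraph0 .. Paragraph15
        value = record[f"Paragraph{i}"]
        if value != -1:
            pairs.append((i, LABEL_NAMES[value]))
    return pairs

# The example record above, reduced to its paragraph columns.
record = {f"Paragraph{i}": -1 for i in range(16)}
record.update({"Paragraph0": 2, "Paragraph1": 0, "Paragraph2": 0,
               "Paragraph3": 1, "Paragraph4": 1})
print(paragraph_labels(record))
# → [(0, 'Positive'), (1, 'Negative'), (2, 'Negative'), (3, 'Neutral'), (4, 'Neutral')]
```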

### Data Splits

To split the dataset, entities were divided into four mutually exclusive sets. Due to the nature of news collections, some entities tend to dominate the collection: four entities alone were the main entity of nearly 800 articles. To keep them from dominating the train or test splits, their articles were moved to a separate test collection. The remainder was split into training, dev, and test sets at random. The collection thus includes one standard test set of articles drawn at random (Test Standard, released as the `test_random` split) and a second test set containing many articles about a small number of popular entities (Test Frequent, released as the `test_fixed` split).

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

Articles were selected from three sources:
1. MPQA (Deng and Wiebe, 2015; Wiebe et al., 2005): This dataset contains news articles manually annotated for opinions, beliefs, emotions, sentiments, speculations, etc. It also has target annotations, which are entities and events anchored to the heads of noun or verb phrases. All decisions in this dataset are made at the sentence level and over short spans.
2. KBP Challenge (Ellis et al., 2014): This resource contains the TAC 2014 KBP English sentiment slot-filling challenge dataset, a document-level task: given an entity and a sentiment (positive/negative) from the document, the goal is to find the entities toward which the original entity holds the given sentiment. We selected the documents from this resource that were used in similar sentiment-analysis work (Choi et al., 2016).
3. Media Rank (Ye and Skiena, 2019): This dataset ranks about 50k news sources along different aspects. It has also been used for classifying the political ideology of news articles (Kulkarni et al., 2018).

Pre-processing steps:
- First, we found all person entities in each article using the Stanford NER (Named Entity Recognition) tagger (Finkel et al., 2005), and all mentions of them using co-reference resolution (Clark and Manning, 2016; Co, 2017).
- We removed articles that are unlikely to have a main entity of focus, using a simple heuristic: drop any article in which the most frequent person entity is mentioned three times or fewer, even when counting co-referent mentions.
- For the articles that remained, we deemed the most frequent entity to be the main entity of the article. We also filtered out extremely long and extremely short articles, keeping those with at least 3 and at most 16 paragraphs.

Documents were randomly separated into train, dev, and two test sets, ensuring that each entity appears in only one of the sets; the goal is to avoid easy-to-learn biases over entities. To keep the most frequent entities from dominating the training or test sets, we removed the articles covering them and used those as a separate test set (referred to as the frequent test set), in addition to the randomly drawn standard test set.
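
The entity-disjoint split described above can be sketched as follows; the 80/10/10 ratios, the function name, and the input format are illustrative assumptions, not details taken from the paper:

```python
import random
from collections import Counter

def entity_disjoint_splits(articles, n_frequent=4, seed=0):
    """Split articles so that every entity lands in exactly one split.

    `articles` is a list of (article_id, main_entity) pairs. The
    n_frequent entities with the most articles are held out as a
    separate "test_frequent" split; the remaining entities are shuffled
    and assigned to train/dev/test_random (80/10/10 by entity).
    """
    counts = Counter(entity for _, entity in articles)
    frequent = {e for e, _ in counts.most_common(n_frequent)}
    rest = [e for e in counts if e not in frequent]
    random.Random(seed).shuffle(rest)

    n = len(rest)
    assignment = {e: "train" for e in rest[: int(0.8 * n)]}
    assignment.update({e: "dev" for e in rest[int(0.8 * n): int(0.9 * n)]})
    assignment.update({e: "test_random" for e in rest[int(0.9 * n):]})
    assignment.update({e: "test_frequent" for e in frequent})

    splits = {"train": [], "dev": [], "test_random": [], "test_frequent": []}
    for article_id, entity in articles:
        splits[assignment[entity]].append(article_id)
    return splits
```

Because assignment is per entity rather than per article, a model cannot exploit entity identity learned at training time when it is evaluated on either test set.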

### Annotations

#### Annotation process

We obtained document- and paragraph-level annotations with the help of Amazon Mechanical Turk workers. The workers first verified that the target entity we provided is indeed the main entity in the document. They then rated each paragraph in the document that contained a direct mention of, or a reference to, the target entity. Last, they rated the sentiment towards the entity based on the entire document. In both cases, the workers assessed the author's view based on what the author said about the target entity. For both paragraph- and document-level sentiment, the workers chose from five rating categories: Negative, Slightly Negative, Neutral, Slightly Positive, or Positive. We then combined the fine-grained annotations to obtain three coarse-grained classes: Negative, Neutral, and Positive.
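
As a sketch, one plausible collapse of the five rating categories into the three released classes folds each "Slightly" rating into its polar class; whether the original annotation pipeline used exactly this mapping is an assumption, not something this card states:

```python
# Hypothetical collapse of the five crowd-rating categories into the
# three coarse classes released in the dataset. The treatment of the
# "Slightly" categories is an assumption for illustration only.
COARSE = {
    "Negative": "Negative",
    "Slightly Negative": "Negative",
    "Neutral": "Neutral",
    "Slightly Positive": "Positive",
    "Positive": "Positive",
}

def coarse_label(fine_rating):
    """Map a fine-grained worker rating to a coarse-grained class."""
    return COARSE[fine_rating]
```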

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

[More Information Needed]

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/)

### Citation Information

```
@inproceedings{bastan2020authors,
  title={Author's Sentiment Prediction},
  author={Mohaddeseh Bastan and Mahnaz Koupaee and Youngseo Son and Richard Sicoli and Niranjan Balasubramanian},
  year={2020},
  eprint={2011.06128},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset.