dianalogan committed · Commit 3d51e4f · 1 parent: e50c23d

Update README.md

Files changed (1): README.md (+554 −1)
---
annotations_creators:
- found
language_creators:
- found
languages:
- en
licenses:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- extended|other-tweet-datasets
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
- sentiment-classification
paperswithcode_id: tweeteval
pretty_name: TweetEval
train-eval-index:
- config: emotion
  task: text-classification
  task_id: multi_class_classification
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    text: text
    label: target
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 macro
    args:
      average: macro
  - type: f1
    name: F1 micro
    args:
      average: micro
  - type: f1
    name: F1 weighted
    args:
      average: weighted
  - type: precision
    name: Precision macro
    args:
      average: macro
  - type: precision
    name: Precision micro
    args:
      average: micro
  - type: precision
    name: Precision weighted
    args:
      average: weighted
  - type: recall
    name: Recall macro
    args:
      average: macro
  - type: recall
    name: Recall micro
    args:
      average: micro
  - type: recall
    name: Recall weighted
    args:
      average: weighted
- config: hate
  task: text-classification
  task_id: binary_classification
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    text: text
    label: target
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 binary
    args:
      average: binary
  - type: precision
    name: Precision macro
    args:
      average: macro
  - type: precision
    name: Precision micro
    args:
      average: micro
  - type: precision
    name: Precision weighted
    args:
      average: weighted
  - type: recall
    name: Recall macro
    args:
      average: macro
  - type: recall
    name: Recall micro
    args:
      average: micro
  - type: recall
    name: Recall weighted
    args:
      average: weighted
- config: irony
  task: text-classification
  task_id: binary_classification
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    text: text
    label: target
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 binary
    args:
      average: binary
  - type: precision
    name: Precision macro
    args:
      average: macro
  - type: precision
    name: Precision micro
    args:
      average: micro
  - type: precision
    name: Precision weighted
    args:
      average: weighted
  - type: recall
    name: Recall macro
    args:
      average: macro
  - type: recall
    name: Recall micro
    args:
      average: micro
  - type: recall
    name: Recall weighted
    args:
      average: weighted
- config: offensive
  task: text-classification
  task_id: binary_classification
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    text: text
    label: target
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 binary
    args:
      average: binary
  - type: precision
    name: Precision macro
    args:
      average: macro
  - type: precision
    name: Precision micro
    args:
      average: micro
  - type: precision
    name: Precision weighted
    args:
      average: weighted
  - type: recall
    name: Recall macro
    args:
      average: macro
  - type: recall
    name: Recall micro
    args:
      average: micro
  - type: recall
    name: Recall weighted
    args:
      average: weighted
- config: sentiment
  task: text-classification
  task_id: multi_class_classification
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    text: text
    label: target
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 macro
    args:
      average: macro
  - type: f1
    name: F1 micro
    args:
      average: micro
  - type: f1
    name: F1 weighted
    args:
      average: weighted
  - type: precision
    name: Precision macro
    args:
      average: macro
  - type: precision
    name: Precision micro
    args:
      average: micro
  - type: precision
    name: Precision weighted
    args:
      average: weighted
  - type: recall
    name: Recall macro
    args:
      average: macro
  - type: recall
    name: Recall micro
    args:
      average: micro
  - type: recall
    name: Recall weighted
    args:
      average: weighted
configs:
- emoji
- emotion
- hate
- irony
- offensive
- sentiment
- stance_abortion
- stance_atheism
- stance_climate
- stance_feminist
- stance_hillary
---
# Dataset Card for tweet_eval

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** [GitHub](https://github.com/cardiffnlp/tweeteval)
- **Paper:** [EMNLP Paper](https://arxiv.org/pdf/2010.12421.pdf)
- **Leaderboard:** [GitHub Leaderboard](https://github.com/cardiffnlp/tweeteval)
- **Point of Contact:** [Needs More Information]
### Dataset Summary

TweetEval consists of seven heterogeneous tasks on Twitter, all framed as multi-class tweet classification: irony, hate, offensive, stance, emoji, emotion, and sentiment. All tasks have been unified into the same benchmark, with each dataset presented in the same format and with fixed training, validation and test splits.
### Supported Tasks and Leaderboards

- `text_classification`: Each config can be used to train a sequence classification model (e.g. `AutoModelForSequenceClassification` from Hugging Face Transformers).
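Most configs in the `train-eval-index` metadata above report macro-averaged metrics. As a rough, self-contained illustration of what macro-averaging means (the helper names below are ours, not part of the dataset or any library):

```python
def f1_for_class(y_true, y_pred, cls):
    """One-vs-rest F1 for a single class label."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores ("F1 macro" above)."""
    classes = sorted(set(y_true) | set(y_pred))
    return sum(f1_for_class(y_true, y_pred, c) for c in classes) / len(classes)


print(round(macro_f1([0, 1, 2, 0], [0, 1, 1, 0]), 3))  # 0.556
```

In practice you would use an existing implementation such as `sklearn.metrics.f1_score(..., average="macro")` rather than hand-rolled code; the sketch only makes the averaging explicit.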
### Languages

The text in the dataset is in English, as spoken by Twitter users.

## Dataset Structure

### Data Instances

An instance from the `emoji` config:

```
{'label': 12, 'text': 'Sunday afternoon walking through Venice in the sun with @user ️ ️ ️ @ Abbot Kinney, Venice'}
```

An instance from the `emotion` config:

```
{'label': 2, 'text': "“Worry is a down payment on a problem you may never have'. \xa0Joyce Meyer. #motivation #leadership #worry"}
```

An instance from the `hate` config:

```
{'label': 0, 'text': '@user nice new signage. Are you not concerned by Beatlemania -style hysterical crowds crongregating on you…'}
```

An instance from the `irony` config:

```
{'label': 1, 'text': 'seeing ppl walking w/ crutches makes me really excited for the next 3 weeks of my life'}
```

An instance from the `offensive` config:

```
{'label': 0, 'text': '@user Bono... who cares. Soon people will understand that they gain nothing from following a phony celebrity. Become a Leader of your people instead or help and support your fellow countrymen.'}
```

An instance from the `sentiment` config:

```
{'label': 2, 'text': '"QT @user In the original draft of the 7th book, Remus Lupin survived the Battle of Hogwarts. #HappyBirthdayRemusLupin"'}
```

An instance from the `stance_abortion` config:

```
{'label': 1, 'text': 'we remind ourselves that love means to be willing to give until it hurts - Mother Teresa'}
```

An instance from the `stance_atheism` config:

```
{'label': 1, 'text': '@user Bless Almighty God, Almighty Holy Spirit and the Messiah. #SemST'}
```

An instance from the `stance_climate` config:

```
{'label': 0, 'text': 'Why Is The Pope Upset? via @user #UnzippedTruth #PopeFrancis #SemST'}
```

An instance from the `stance_feminist` config:

```
{'label': 1, 'text': "@user @user is the UK's answer to @user and @user #GamerGate #SemST"}
```

An instance from the `stance_hillary` config:

```
{'label': 1, 'text': "If a man demanded staff to get him an ice tea he'd be called a sexists elitist pig.. Oink oink #Hillary #SemST"}
```
### Data Fields

For the `emoji` config:

- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:

  `0`: ❤
  `1`: 😍
  `2`: 😂
  `3`: 💕
  `4`: 🔥
  `5`: 😊
  `6`: 😎
  `7`: ✨
  `8`: 💙
  `9`: 😘
  `10`: 📷
  `11`: 🇺🇸
  `12`: ☀
  `13`: 💜
  `14`: 😉
  `15`: 💯
  `16`: 😁
  `17`: 🎄
  `18`: 📸
  `19`: 😜

For the `emotion` config:

- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:

  `0`: anger
  `1`: joy
  `2`: optimism
  `3`: sadness

For the `hate` config:

- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:

  `0`: non-hate
  `1`: hate

For the `irony` config:

- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:

  `0`: non_irony
  `1`: irony

For the `offensive` config:

- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:

  `0`: non-offensive
  `1`: offensive

For the `sentiment` config:

- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:

  `0`: negative
  `1`: neutral
  `2`: positive

For the `stance_abortion` config:

- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:

  `0`: none
  `1`: against
  `2`: favor

For the `stance_atheism` config:

- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:

  `0`: none
  `1`: against
  `2`: favor

For the `stance_climate` config:

- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:

  `0`: none
  `1`: against
  `2`: favor

For the `stance_feminist` config:

- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:

  `0`: none
  `1`: against
  `2`: favor

For the `stance_hillary` config:

- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:

  `0`: none
  `1`: against
  `2`: favor
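When post-processing predictions, the integer labels above can be mirrored in a small lookup table. The helper below is purely illustrative (names and structure are ours, and only a few configs are shown):

```python
# Label-id -> name mappings transcribed from the lists above (subset of configs).
LABEL_NAMES = {
    "emotion": ["anger", "joy", "optimism", "sadness"],
    "hate": ["non-hate", "hate"],
    "sentiment": ["negative", "neutral", "positive"],
    "stance_abortion": ["none", "against", "favor"],
}


def label_name(config: str, label_id: int) -> str:
    """Translate an integer `label` into its human-readable name."""
    return LABEL_NAMES[config][label_id]


print(label_name("emotion", 2))  # optimism
```

When the data is loaded through the `datasets` library, the same mapping should also be available directly from the `label` feature (a `ClassLabel`) via its `int2str` method.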
### Data Splits

| name            | train | validation | test  |
| --------------- | ----- | ---------- | ----- |
| emoji           | 45000 | 5000       | 50000 |
| emotion         | 3257  | 374        | 1421  |
| hate            | 9000  | 1000       | 2970  |
| irony           | 2862  | 955        | 784   |
| offensive       | 11916 | 1324       | 860   |
| sentiment       | 45615 | 2000       | 12284 |
| stance_abortion | 587   | 66         | 280   |
| stance_atheism  | 461   | 52         | 220   |
| stance_climate  | 355   | 40         | 169   |
| stance_feminist | 597   | 67         | 285   |
| stance_hillary  | 620   | 69         | 295   |
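For quick sanity checks (e.g. verifying a download or computing sampling fractions), the split sizes can be transcribed into code. The dict below copies three rows of the table and is purely illustrative:

```python
# (train, validation, test) sizes copied from the table above.
SPLIT_SIZES = {
    "emoji": (45000, 5000, 50000),
    "emotion": (3257, 374, 1421),
    "stance_climate": (355, 40, 169),
}


def total_size(config: str) -> int:
    """Total number of examples across all three splits."""
    return sum(SPLIT_SIZES[config])


print(total_size("emotion"))  # 5052
```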
## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

Francesco Barbieri, Jose Camacho-Collados, Luis Espinosa-Anke and Leonardo Neves through Cardiff NLP.

### Licensing Information

This is a collection of datasets rather than a single dataset, so each subset has its own license (the collection itself imposes no additional restrictions).

All of the datasets require complying with the Twitter [Terms Of Service](https://twitter.com/tos) and the Twitter API [Terms Of Service](https://developer.twitter.com/en/developer-terms/agreement-and-policy).

Additionally, the licenses are:

- emoji: undefined
- emotion (EmoInt): undefined
- hate (HatEval): permission required; see [here](http://hatespeech.di.unito.it/hateval.html)
- irony: undefined
- offensive: undefined
- sentiment: [Creative Commons Attribution 3.0 Unported License](https://groups.google.com/g/semevaltweet/c/k5DDcvVb_Vo/m/zEOdECFyBQAJ)
- stance: undefined
### Citation Information

```
@inproceedings{barbieri2020tweeteval,
  title={{TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification}},
  author={Barbieri, Francesco and Camacho-Collados, Jose and Espinosa-Anke, Luis and Neves, Leonardo},
  booktitle={Proceedings of Findings of EMNLP},
  year={2020}
}
```

If you use any of the TweetEval datasets, please cite their original publications:

#### Emotion Recognition:
```
@inproceedings{mohammad2018semeval,
  title={Semeval-2018 task 1: Affect in tweets},
  author={Mohammad, Saif and Bravo-Marquez, Felipe and Salameh, Mohammad and Kiritchenko, Svetlana},
  booktitle={Proceedings of the 12th international workshop on semantic evaluation},
  pages={1--17},
  year={2018}
}
```

#### Emoji Prediction:
```
@inproceedings{barbieri2018semeval,
  title={Semeval 2018 task 2: Multilingual emoji prediction},
  author={Barbieri, Francesco and Camacho-Collados, Jose and Ronzano, Francesco and Espinosa-Anke, Luis and
          Ballesteros, Miguel and Basile, Valerio and Patti, Viviana and Saggion, Horacio},
  booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation},
  pages={24--33},
  year={2018}
}
```

#### Irony Detection:
```
@inproceedings{van2018semeval,
  title={Semeval-2018 task 3: Irony detection in english tweets},
  author={Van Hee, Cynthia and Lefever, Els and Hoste, V{\'e}ronique},
  booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation},
  pages={39--50},
  year={2018}
}
```

#### Hate Speech Detection:
```
@inproceedings{basile-etal-2019-semeval,
  title = "{S}em{E}val-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in {T}witter",
  author = "Basile, Valerio and Bosco, Cristina and Fersini, Elisabetta and Nozza, Debora and Patti, Viviana and
            Rangel Pardo, Francisco Manuel and Rosso, Paolo and Sanguinetti, Manuela",
  booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation",
  year = "2019",
  address = "Minneapolis, Minnesota, USA",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/S19-2007",
  doi = "10.18653/v1/S19-2007",
  pages = "54--63"
}
```

#### Offensive Language Identification:
```
@inproceedings{zampieri2019semeval,
  title={SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval)},
  author={Zampieri, Marcos and Malmasi, Shervin and Nakov, Preslav and Rosenthal, Sara and Farra, Noura and Kumar, Ritesh},
  booktitle={Proceedings of the 13th International Workshop on Semantic Evaluation},
  pages={75--86},
  year={2019}
}
```

#### Sentiment Analysis:
```
@inproceedings{rosenthal2017semeval,
  title={SemEval-2017 task 4: Sentiment analysis in Twitter},
  author={Rosenthal, Sara and Farra, Noura and Nakov, Preslav},
  booktitle={Proceedings of the 11th international workshop on semantic evaluation (SemEval-2017)},
  pages={502--518},
  year={2017}
}
```

#### Stance Detection:
```
@inproceedings{mohammad2016semeval,
  title={Semeval-2016 task 6: Detecting stance in tweets},
  author={Mohammad, Saif and Kiritchenko, Svetlana and Sobhani, Parinaz and Zhu, Xiaodan and Cherry, Colin},
  booktitle={Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)},
  pages={31--41},
  year={2016}
}
```