Sasha Luccioni committed
Commit 7c6472e • 1 parent: b7ea7b8

Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full (#4338)


* Eval metadata for Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full

* Update README.md

oops, duplicate

* Update README.md

adding header

* Update README.md

adding header

* Update datasets/xsum/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update README.md

removing ROUGE args

* Update README.md

removing F1 params

* Update README.md

* Update README.md

updating to binary F1

* Update README.md

oops typo

* Update README.md

F1 binary

Co-authored-by: sashavor <sasha.luccioni@huggingface.co>
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

Commit from https://github.com/huggingface/datasets/commit/bce8e6541af98cb0e06b24f9c96a6431cf813a38

Files changed (1):
  1. README.md +218 -1
README.md CHANGED
@@ -89,6 +89,223 @@ task_ids:
 - multi-class-classification
 paperswithcode_id: tweeteval
 pretty_name: TweetEval
+train-eval-index:
+- config: emotion
+  task: text-classification
+  task_id: multi_class_classification
+  splits:
+    train_split: train
+    eval_split: test
+  col_mapping:
+    text: text
+    label: target
+  metrics:
+  - type: accuracy
+    name: Accuracy
+  - type: f1
+    name: F1 macro
+    args:
+      average: macro
+  - type: f1
+    name: F1 micro
+    args:
+      average: micro
+  - type: f1
+    name: F1 weighted
+    args:
+      average: weighted
+  - type: precision
+    name: Precision macro
+    args:
+      average: macro
+  - type: precision
+    name: Precision micro
+    args:
+      average: micro
+  - type: precision
+    name: Precision weighted
+    args:
+      average: weighted
+  - type: recall
+    name: Recall macro
+    args:
+      average: macro
+  - type: recall
+    name: Recall micro
+    args:
+      average: micro
+  - type: recall
+    name: Recall weighted
+    args:
+      average: weighted
+- config: hate
+  task: text-classification
+  task_id: binary_classification
+  splits:
+    train_split: train
+    eval_split: test
+  col_mapping:
+    text: text
+    label: target
+  metrics:
+  - type: accuracy
+    name: Accuracy
+  - type: f1
+    name: F1 binary
+    args:
+      average: binary
+  - type: precision
+    name: Precision macro
+    args:
+      average: macro
+  - type: precision
+    name: Precision micro
+    args:
+      average: micro
+  - type: precision
+    name: Precision weighted
+    args:
+      average: weighted
+  - type: recall
+    name: Recall macro
+    args:
+      average: macro
+  - type: recall
+    name: Recall micro
+    args:
+      average: micro
+  - type: recall
+    name: Recall weighted
+    args:
+      average: weighted
+- config: irony
+  task: text-classification
+  task_id: binary_classification
+  splits:
+    train_split: train
+    eval_split: test
+  col_mapping:
+    text: text
+    label: target
+  metrics:
+  - type: accuracy
+    name: Accuracy
+  - type: f1
+    name: F1 binary
+    args:
+      average: binary
+  - type: precision
+    name: Precision macro
+    args:
+      average: macro
+  - type: precision
+    name: Precision micro
+    args:
+      average: micro
+  - type: precision
+    name: Precision weighted
+    args:
+      average: weighted
+  - type: recall
+    name: Recall macro
+    args:
+      average: macro
+  - type: recall
+    name: Recall micro
+    args:
+      average: micro
+  - type: recall
+    name: Recall weighted
+    args:
+      average: weighted
+- config: offensive
+  task: text-classification
+  task_id: binary_classification
+  splits:
+    train_split: train
+    eval_split: test
+  col_mapping:
+    text: text
+    label: target
+  metrics:
+  - type: accuracy
+    name: Accuracy
+  - type: f1
+    name: F1 binary
+    args:
+      average: binary
+  - type: precision
+    name: Precision macro
+    args:
+      average: macro
+  - type: precision
+    name: Precision micro
+    args:
+      average: micro
+  - type: precision
+    name: Precision weighted
+    args:
+      average: weighted
+  - type: recall
+    name: Recall macro
+    args:
+      average: macro
+  - type: recall
+    name: Recall micro
+    args:
+      average: micro
+  - type: recall
+    name: Recall weighted
+    args:
+      average: weighted
+- config: sentiment
+  task: text-classification
+  task_id: multi_class_classification
+  splits:
+    train_split: train
+    eval_split: test
+  col_mapping:
+    text: text
+    label: target
+  metrics:
+  - type: accuracy
+    name: Accuracy
+  - type: f1
+    name: F1 macro
+    args:
+      average: macro
+  - type: f1
+    name: F1 micro
+    args:
+      average: micro
+  - type: f1
+    name: F1 weighted
+    args:
+      average: weighted
+  - type: precision
+    name: Precision macro
+    args:
+      average: macro
+  - type: precision
+    name: Precision micro
+    args:
+      average: micro
+  - type: precision
+    name: Precision weighted
+    args:
+      average: weighted
+  - type: recall
+    name: Recall macro
+    args:
+      average: macro
+  - type: recall
+    name: Recall micro
+    args:
+      average: micro
+  - type: recall
+    name: Recall weighted
+    args:
+      average: weighted
 ---
 
 # Dataset Card for tweet_eval
@@ -482,7 +699,7 @@ If you use any of the TweetEval datasets, please cite their original publication
 ```
 @inproceedings{barbieri2018semeval,
 title={Semeval 2018 task 2: Multilingual emoji prediction},
-author={Barbieri, Francesco and Camacho-Collados, Jose and Ronzano, Francesco and Espinosa-Anke, Luis and
+author={Barbieri, Francesco and Camacho-Collados, Jose and Ronzano, Francesco and Espinosa-Anke, Luis and
 Ballesteros, Miguel and Basile, Valerio and Patti, Viviana and Saggion, Horacio},
 booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation},
 pages={24--33},
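The metric entries added above distinguish binary configs (`F1 binary` with `average: binary`) from multi-class configs (`F1 macro`/`micro`/`weighted`), which is what the "updating to binary F1" commits were about. As a minimal sketch of what those `average` modes compute, here is a pure-Python stand-in for a metrics library's F1 (the `average` semantics are assumed to mirror scikit-learn's `f1_score` parameter of the same name; the labels and predictions below are made-up illustration data):

```python
from collections import Counter


def f1_per_class(y_true, y_pred, label):
    """F1 for a single class treated as the positive class."""
    tp = sum(t == p == label for t, p in zip(y_true, y_pred))
    fp = sum(p == label and t != label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


def f1(y_true, y_pred, average="binary", pos_label=1):
    labels = sorted(set(y_true) | set(y_pred))
    if average == "binary":
        # Score only the positive class (used by the hate/irony/offensive configs).
        return f1_per_class(y_true, y_pred, pos_label)
    if average == "micro":
        # Pool TP/FP/FN over all classes; for single-label data this
        # reduces to accuracy.
        return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    scores = [f1_per_class(y_true, y_pred, lbl) for lbl in labels]
    if average == "macro":
        # Unweighted mean over classes.
        return sum(scores) / len(scores)
    if average == "weighted":
        # Mean over classes weighted by each class's support in y_true.
        support = Counter(y_true)
        total = len(y_true)
        return sum(s * support[lbl] / total for s, lbl in zip(scores, labels))
    raise ValueError(f"unknown average: {average!r}")


y_true = [0, 0, 0, 1]
y_pred = [0, 0, 1, 1]
print(f1(y_true, y_pred, average="binary"))    # ~0.667 (positive class only)
print(f1(y_true, y_pred, average="macro"))     # ~0.733 (mean of per-class F1s)
print(f1(y_true, y_pred, average="weighted"))  # ~0.767 (support-weighted mean)
```

The example shows why the binary configs report a single `F1 binary` number while the multi-class configs (emotion, sentiment) need a choice of averaging: with more than two classes there is no single positive class, so per-class F1s must be aggregated one way or another.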